df.lab: Chasing technology
We test and automate promising technologies, often before they are ready for prime time. Then we can offer them to your software shop, without all the hiccups and slowdowns.
We're extremely lucky to get to do this work. If we can help someone else with what we learn, it's all worthwhile.
What follows is a sampling of the kinds of projects we get to work on.
We Fail First, So You Can Succeed
Lots of stuff we test and automate isn't ready for prime time.
- We love OSGi, for example. We've used it successfully for years, but it's not something we'd recommend for our typical customers, so we provide an alternate path to modularity, even an in-between path. We're also finding lots of ways to fail and succeed with Linux containers, which accomplish many of the same modularity goals as OSGi.
- We love Drupal, and have used it to power dozens of sites, including this one. But we'd always recommend WordPress to our customers first, wherever it would be sufficient.
- We've used lots of technologies before they matured into something usable. Maven and Chef were once quite immature, despite their current success.
- Dynamic languages, functional languages: we've experimented with several of each.
Working the kinks out:
We've spent a lot of time working in the DevOps area, and we feel it has a lot to offer our customers, right here and right now. Here are some of the technologies we've been working with for the past couple of years; many are actually ready for prime time.
- Test Kitchen
- LWRP creation
- data containers
- service discovery
- Ruby development
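As a concrete example of how we exercise cookbooks, Test Kitchen drives this kind of work from a small YAML file. A minimal sketch might look like the following (the cookbook and platform names are illustrative, not from our repositories):

```yaml
---
driver:
  name: vagrant          # spins up a throwaway VM for each suite

provisioner:
  name: chef_zero        # converges the cookbook against an in-memory Chef server

platforms:
  - name: ubuntu-20.04

suites:
  - name: default
    run_list:
      - recipe[example_cookbook::default]   # hypothetical cookbook name
```

Running `kitchen test` then creates the VM, converges the run list, runs any verifier suites, and tears the instance down.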
We don't just play with the tech. Take a look at our GitHub repository; you'll find dozens of our own Chef recipes and Docker scripts. We hope that better versions of community Chef recipes will someday make our own recipes obsolete. In some cases, they already have.
Would you bet on what can, and cannot, be automated? We have! We've successfully automated many common tasks. This isn't always popular; developers and operations engineers are not always fond of automation. We're fortunate that projects such as Spring Boot are even more aggressive than we are, which softens the resistance we'd otherwise encounter.
- Models and schemas generating downstream code such as views, controllers, and handler code. And vice versa.
- Hadoop ETL automation
- Source code template automation
- Build Automation
- Tooling, Eclipse Plugins
- Parsers, Transformers
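To illustrate the source-code-template idea, here's a minimal Ruby/ERB sketch that generates a class stub from a model description. The model and the generated shape are invented for the example; in practice the model might come from a schema file.

```ruby
require 'erb'

# Invented model description; real input might be a parsed schema.
model = {
  name: "Customer",
  fields: { "id" => "Integer", "email" => "String" }
}

# ERB template for the downstream code; trim_mode "-" strips the
# lines that contain only template logic.
template = ERB.new(<<~TMPL, trim_mode: "-")
  class <%= model[:name] %>
  <%- model[:fields].each do |fname, ftype| -%>
    attr_accessor :<%= fname %> # <%= ftype %>
  <%- end -%>
  end
TMPL

generated = template.result(binding)
puts generated
```

Change the model and the same template emits a different class; that's the whole trick behind keeping models and generated code in sync.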
Lots of people think analytics is just market analysis, but we see more than that here. We're big on extending the rules space into the options world. We use OptaPlanner and a scoring system to analyze option sets that would be too big for teams to consider without the extra help. Can this grow into a viable field that extends into other areas? Only time will tell.
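The scoring idea can be sketched without the real solver: OptaPlanner-style scoring ranks candidates by a hard score (feasibility) before a soft score (preference). A toy Ruby version, with made-up options, demand, and budget, might look like:

```ruby
# Toy hard/soft scoring in the spirit of OptaPlanner's HardSoftScore.
# All option data and weights here are invented for the sketch.
Option = Struct.new(:name, :cost, :capacity)

options = [
  Option.new("small",  40,  80),
  Option.new("medium", 70, 120),
  Option.new("large", 120, 200)
]

demand = 100
budget = 75

# Hard score: 0 if the option can cover demand, -1 if not.
# Soft score: prefer the cost closest to budget.
score = ->(o) { [o.capacity >= demand ? 0 : -1, -(o.cost - budget).abs] }

# Arrays compare element by element, so the hard score dominates.
best = options.max_by(&score)
puts best.name   # "medium": feasible, and closest to budget
```

A real solver earns its keep when the option space is too large to enumerate like this; the scoring discipline is the part that carries over.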
We're big fans of shops like Thoughtworks, where guys like Martin Fowler and Jez Humble get to work out lots of approaches that carry promise for teams everywhere.
But how does all that stuff really work? When we use it in our own software, how much does it add to the costs? What are the downstream savings? What kinds of changes in thinking and muscle memory does this require? These are all questions that are just too expensive to research at most software shops. They've got work to do.
Here, we get a feel for what does and does not work. We've been experimenting with many forms of testing: test-first development, different types of tests, tools for testing, and the like. We've learned a lot, and not all of it is good. But here, where it's safe, we get to put some of that behind us before pushing out into our customers' regimens. Continuous Delivery, in its many forms.
What Are the Fundamentals of Data?
When are certain types of schemas better than others?
When does Schema on Read beat Schema on Write?
Which types of databases and messaging systems work best for which types of situations?
Is the big push for immutable data and functional programming a fad, or a permanent shift?
Which of the many NoSQL variants makes the best sense for which situations?
When should we consider denormalized data and materialized views? How should we wrestle with duplicate data?
What indexing technologies and strategies work best for this data set?
These are all fair questions. We get to experiment on these questions here in df.lab.
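One way we make the schema-on-read question concrete: store raw records untouched, and apply structure only when reading. A small Ruby sketch (the event data is invented):

```ruby
require 'json'

# Schema-on-write would validate and shape these before storing.
# Schema-on-read stores them as-is and defers structure to the reader.
raw_events = [
  '{"user":"ann","amount":"12.50"}',
  '{"user":"bob","amount":"3"}'
]

# The "schema" lives in the reader and is applied lazily at query time.
read_event = ->(line) do
  h = JSON.parse(line)
  { user: h["user"], amount: Float(h["amount"]) }
end

total = raw_events.map(&read_event).sum { |e| e[:amount] }
puts total   # 15.5
```

The trade-off in miniature: writes stay cheap and flexible, while every reader must carry (and agree on) the interpretation.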
We're doing some exciting things now, but you can look back through our history and see lots of interesting pieces from over a decade back. It's in our DNA.
It doesn't always pan out, but that's the point. We do it here, first. When we get the kinks worked out, maybe we can use it to help your shop along.