Mike Stone, CIO & Principal Architect
When we are asked to introduce a new database technology or architecture at an existing customer site, some of the first questions we hear are about performance. Of course there is plenty of good information about hardware performance, clock speeds, latency, and throughput in the product marketing materials, but how does that really translate to the performance of your database or business-critical application? Maybe I’m old school, but I spent a good part of my education learning how to answer that question. If you’re not asking it, you probably should be.
The bad news is that, as with most IT questions, the answer is “it depends”…
- Is your workload I/O bound?
- Will more CPUs help or hinder?
- Will adding more memory help?
- Do you need Fibre Channel/SSD or will Copper/SATA/RAID do the trick?
- Do you need throughput or IOPS?
- Do you need horizontal scalability or will vertical work?
Most people default to buying what they perceive as the best and fastest hardware they can afford and hope it all works out. The good news is that, when the system is properly configured, it often does work out. The next big problem, however, is that software vendors are working harder than ever to capitalize on your hardware purchase. Over-buying hardware can have a disastrous effect on your IT budget when it comes time to license the software that runs on it. Even if you dodged the bullet on the hardware acquisition, the Microsofts and Oracles of the world are waiting to “help you out” with that budget surplus, and their bill can easily add up to multiples of the hardware cost.
So back to where we started – it is a good idea to consider not only how much and what type of hardware you can afford, but to also ask the question “How little can I get away with?” House of Brick has made it our business to help customers answer both sides of this question for almost two decades, and one of the tools we use to help figure it out is HammerDB.
HammerDB is a free, open-source tool that allows us to generate a synthetic database workload based on the industry-standard TPC-C (OLTP) and TPC-H (DSS/DWH) benchmarks. We also use HammerDB as a kind of relative benchmark. Although we can’t publish results for most software due to licensing restrictions, we can compare our experience against a set of guidelines to determine whether we feel the performance of a given architecture will be adequate in the general case. We have also used Swingbench for Oracle workloads, but we more often lean toward HammerDB (formerly HammerOra) because it supports other platforms in addition to Oracle: SQL Server, TimesTen, PostgreSQL, Greenplum, Postgres Plus Advanced Server, MySQL, Redis, and Trafodion SQL on Hadoop, via an extensible framework. Although Dominic Giles is a great guy and has produced an amazing tool in Swingbench, he’s only one guy. HammerDB has a community dedicated to enhancing and supporting it.
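To give a flavor of what driving a test looks like, more recent versions of HammerDB ship a command-line interface (`hammerdbcli`) scripted in Tcl. The sketch below outlines a minimal TPC-C build-and-run against SQL Server; the hostname, warehouse count, and virtual-user counts are illustrative assumptions, and the exact `diset` parameter keys vary by HammerDB version and target database, so check the documentation for your release before using it.

```tcl
# Hypothetical hammerdbcli script: build a small TPC-C schema and run a timed test.
# Values (server name, warehouse count, virtual users) are examples only.
dbset db mssqls                          ;# target database: SQL Server
dbset bm TPC-C                           ;# benchmark: TPC-C (OLTP)

diset connection mssqls_server localhost ;# assumed test host
diset tpcc mssqls_count_ware 10          ;# 10 warehouses (small test schema)
diset tpcc mssqls_num_vu 4               ;# 4 virtual users to build the schema
buildschema                              ;# create and load the TPC-C schema

diset tpcc mssqls_driver timed           ;# timed test rather than fixed iterations
diset tpcc mssqls_rampup 2               ;# 2-minute ramp-up
diset tpcc mssqls_duration 5             ;# 5-minute measured run
loadscript                               ;# load the driver script for these settings

vuset vu 8                               ;# 8 virtual users for the workload
vucreate
vurun                                    ;# run; results reported in TPM/NOPM
vudestroy
```

The usual approach is to repeat the timed run while stepping up the virtual-user count to find where throughput plateaus for a given configuration.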
The documentation for HammerDB is extensive and well written. It covers not only the use of the tool with various software solutions, but also general testing methodology, strategy and planning.
A synthetic workload is good for highlighting problems with architecture or configuration, and for establishing relative performance within a set of guidelines. But, of course, the only way to truly know what the “performance” will be for your particular application is to test it out with your workload.
If you really need to know how your app will perform on a new architecture, there are some rather expensive tools out there to help you create test scenarios and capture workload activity from your existing application at various layers. While these tools certainly have their place, you’re going to pay a premium not only for licensing, but also in project costs to use and maintain both the tool and your testing scenarios. Depending upon your specific needs, however, HammerDB’s Oracle functionality does include a rudimentary capture/replay capability to collect workload activity from one Oracle database server and replay it on another, which can be a good alternative to these expensive options in some cases.
Because they’ve done such a good job with documentation, there’s not much more to cover in this blog article, except to say that if you’re new to database load testing, I recommend you start with this guide – Introduction to Transactional (TPC-C) Testing for all Databases.
House of Brick’s roots run deep in system performance testing and benchmarking and we work to ensure that development products are not only deployable, but also run on cost-effective hardware. With that philosophy in mind, we take advantage of free tools like HammerDB wherever possible to maximize the value of our engagements for our customers and we encourage you to do the same in your own environments.