Load Testing with HammerDB

posted June 23, 2015, 10:13 AM by

Mike Stone (@HoBMStone), CIO & Principal Architect

When we are asked to introduce new database technology or a new architecture at an existing customer site, some of the first questions are always about performance. Of course there’s plenty of good information about hardware performance, clock speeds, latency, and throughput in the product marketing materials, but how does that really translate to the performance of your database or business-critical application? Now, maybe I’m old school, but I spent a good part of my education learning how to answer that question. If you’re not asking it, you probably should be.

The bad news is that, as with most IT questions, the answer is “it depends”…

  • Is your workload I/O dependent?
  • Will more CPUs help or hinder?
  • Will adding more memory help?
  • Do you need Fibre Channel/SSD or will Copper/SATA/RAID do the trick?
  • Do you need throughput or IOPS?
  • Do you need horizontal scalability or will vertical work?
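
The throughput-versus-IOPS question above comes down to simple arithmetic: throughput is just IOPS multiplied by the I/O size, so a device rated for many small random IOPS may still deliver modest MB/s on large scans. A minimal sketch (the device numbers are invented for illustration, not vendor specs):

```python
# Illustrative arithmetic for the throughput vs. IOPS question.
# The IOPS figures below are made up for the example, not real device specs.

def throughput_mb_s(iops: int, io_size_kb: int) -> float:
    """Throughput implied by an IOPS rating at a given I/O size."""
    return iops * io_size_kb / 1024  # KB/s -> MB/s

# An OLTP workload doing 8 KB random reads:
oltp = throughput_mb_s(iops=20_000, io_size_kb=8)    # 156.25 MB/s
# A DSS/DWH scan doing 1 MB sequential reads:
dss = throughput_mb_s(iops=500, io_size_kb=1024)     # 500.0 MB/s

print(f"OLTP: {oltp:.2f} MB/s, DSS: {dss:.2f} MB/s")
```

Note how the workload with 40x fewer IOPS still needs over 3x the bandwidth, which is why the OLTP-or-DSS question has to be answered before the storage question.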


Most people default to buying what they perceive as the best and fastest hardware they can afford and hope it all works out. The good news is that, when the system is properly configured, it often does. The next big problem, however, is that software vendors are working harder than ever to capitalize on your hardware purchase. Over-buying hardware can have a disastrous effect on your IT budget when it comes to licensing the software that runs on it. Even if you dodged the bullet on the hardware acquisition, the Microsofts and Oracles are there waiting to “help you out” with that budget surplus, and their bill can easily add up to multiples of the hardware cost.
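
To put rough numbers on that point: per-core licensing means every extra core you buy gets multiplied by the vendor’s price list. A sketch of the arithmetic, using a hypothetical list price and core factor for illustration (check your own agreements for real figures):

```python
# Rough per-core license cost estimate. The list price and core factor
# here are hypothetical placeholders, not a quote from any vendor.

def license_cost(cores: int, core_factor: float, price_per_core: float) -> float:
    return cores * core_factor * price_per_core

host = license_cost(cores=32, core_factor=0.5, price_per_core=47_500)
right_sized = license_cost(cores=16, core_factor=0.5, price_per_core=47_500)
print(f"32-core host: ${host:,.0f}; 16-core host: ${right_sized:,.0f}")
# Halving the core count halves the license bill -- often a far larger
# saving than the hardware discount you negotiated.
```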

So back to where we started – it is a good idea to consider not only how much and what type of hardware you can afford, but to also ask the question “How little can I get away with?” House of Brick has made it our business to help customers answer both sides of this question for almost two decades, and one of the tools we use to help figure it out is HammerDB.

HammerDB is a free, open-source tool that allows us to generate a synthetic database workload based on the industry-standard TPC-C (OLTP) and TPC-H (DSS/DWH) benchmarks. We also use HammerDB as a kind of relative benchmark. Although we can’t publish results for most software due to licensing restrictions, we can compare our experience against a set of guidelines to determine whether we feel the performance of a given architecture will be adequate in the general case. We have also used Swingbench for Oracle workloads, but we more often lean toward HammerDB (formerly HammerOra) because it supports other platforms in addition to Oracle: SQL Server, TimesTen, PostgreSQL, Greenplum, Postgres Plus Advanced Server, MySQL, Redis, and Trafodion SQL on Hadoop, via an extensible tooling framework. Although Dominic Giles is a great guy and has produced an amazing tool in Swingbench, he’s only one guy. HammerDB has a community dedicated to enhancing and supporting it.

The documentation for HammerDB is extensive and well written. It covers not only the use of the tool with various software solutions, but also general testing methodology, strategy and planning.

A synthetic workload is good for highlighting problems with architecture or configuration, and for establishing relative performance within a set of guidelines. But, of course, the only way to truly know what the “performance” will be for your particular application is to test it out with your workload.
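
When we use HammerDB as a relative benchmark, the interesting number is usually not the absolute transactions-per-minute figure but the ratio between two configurations. A small sketch of that comparison (the TPM values are invented for the example; real runs report them in HammerDB’s output):

```python
# Compare two HammerDB runs as a relative benchmark.
# The TPM numbers below are invented for illustration.

def relative_change(baseline_tpm: float, candidate_tpm: float) -> float:
    """Percentage change of the candidate configuration vs. the baseline."""
    return (candidate_tpm - baseline_tpm) / baseline_tpm * 100

baseline = 120_000   # e.g., a production-like reference configuration
candidate = 150_000  # e.g., the proposed architecture
print(f"Candidate is {relative_change(baseline, candidate):+.1f}% vs. baseline")
```

The absolute numbers stay in-house; the percentage delta against a known-good baseline is what feeds the adequacy judgment.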

If you really need to know how your app will perform on a new architecture, there are some rather expensive tools out there to help you create test scenarios and capture workload activity from your existing application at various layers. While these tools certainly have their place, you’re going to pay a premium not only for licensing, but also in project costs to use and maintain the tool and your testing scenarios. Depending on your specific needs, however, HammerDB’s Oracle functionality includes a rudimentary capture/replay capability that collects workload activity from one Oracle database server and replays it on another, which can be a good alternative to these expensive options in some cases.

Because they’ve done such a good job with documentation, there’s not much more to cover in this blog article, except to say that if you’re new to database load testing, I recommend you start with this guide – Introduction to Transactional (TPC-C) Testing for all Databases.

House of Brick’s roots run deep in system performance testing and benchmarking and we work to ensure that development products are not only deployable, but also run on cost-effective hardware. With that philosophy in mind, we take advantage of free tools like HammerDB wherever possible to maximize the value of our engagements for our customers and we encourage you to do the same in your own environments.



4 Comments

  • HammerDB’s capture/replay capability works the same as Oracle RAT and other similar tools in terms of constraints and sequences.

    The proper procedure is as follows:

    1. Before starting your workload capture, make sure you have a good backup of the source database from which you intend to capture the workload.

    2. Capture the workload of interest for a period of time, then stop the capture.

    3. Perform post-capture functions to prepare the workload for replay.

    4. Restore the backup of the source database to a test location and apply archived redo up to the moment in time when the capture was started. This should minimize replay errors due to constraint and sequence issues. You could also make a backup of the test database at this point to facilitate resetting it for subsequent tests.

    5. Replay the captured workload against the test database.

    6. Repeat steps 4 and 5 as needed to test different variables.
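
    Steps 4–6 above amount to a restore/replay loop, one pass per variable under test. A hypothetical orchestration skeleton of that loop (the `restore` and `replay` callables are placeholders standing in for your actual RMAN restore and HammerDB replay invocations):

```python
# Hypothetical skeleton for repeating steps 4-6 of the procedure above.
# `restore` and `replay` are placeholders for your real RMAN restore
# and HammerDB replay commands; they are not part of any tool's API.

from typing import Callable, Dict

def run_trials(configs: Dict[str, dict],
               restore: Callable[[], None],
               replay: Callable[[dict], float]) -> Dict[str, float]:
    results = {}
    for name, cfg in configs.items():
        restore()                    # step 4: reset the test DB from backup
        results[name] = replay(cfg)  # step 5: replay the captured workload
    return results                   # step 6: compare results across variables
```

    Resetting from the same backup before every replay is what keeps constraint and sequence state identical across trials, so differences in the results reflect the variable under test rather than replay drift.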

  • MCH says:

    I need to capture load from a high-volume commerce site and replay it on a standby database. Can HammerDB support this without constraint errors?

  • Muhaimin says:

    Do you have any experience with TPC-B?

    • According to the TPC website, that benchmark became obsolete in 1995:
      http://www.tpc.org/tpcb

      It was an attempt to quantify “batch” transactions per second with a focus on I/O as well as transactional integrity. Its purpose was not clearly defined and its assumptions were too broad to be useful, so it was deprecated and is not supported by any modern tooling I am aware of.

      The combination of TPC-C and TPC-H essentially replaces TPC-B; they can be run independently or simultaneously to achieve a realistic mixed workload that incorporates the essence of the older test.
