9 Key Questions Every IT Department Should Ask in 2019

Jim Hannan (@jim_m_hannan), Principal Architect; Nick Walter, Principal Architect; and Bob Lindquist, Director of Client Services

In this blog we take a look at some trends that we have observed in the industry over the past year, and what we believe 2019 may bring for many IT organizations. Here are some of the most important questions we think IT departments should be asking in 2019.

Q1: Are we paying too much for database licenses?

A1: We often find that customers are paying too much for their Oracle or SQL Server licenses, both for initial purchases and annual support payments. Overpayment can result either from owning more licenses than are actually in use, including licenses for products that are no longer needed, or from an unoptimized environment where processor-based software is deployed on more cores than are actually required.
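To make the core-count point concrete, the sketch below shows how processor-based Oracle licensing is typically counted. The 0.5 core factor for most Intel x86 chips comes from Oracle's published core factor table; the list price and 22% support rate are illustrative assumptions only, so check your own contract.

```python
import math

def oracle_processor_licenses(physical_cores: int, core_factor: float = 0.5) -> int:
    """Processor licenses required: physical cores x core factor, rounded up.
    0.5 is the published core factor for most Intel x86 processors."""
    return math.ceil(physical_cores * core_factor)

# Hypothetical figures for illustration -- verify against your own agreement.
LIST_PRICE_PER_LICENSE = 47_500   # per processor license (illustrative)
ANNUAL_SUPPORT_RATE = 0.22        # annual support as a fraction of license fees

def annual_support(physical_cores: int) -> float:
    """Estimated annual support bill for a given host size."""
    return oracle_processor_licenses(physical_cores) * LIST_PRICE_PER_LICENSE * ANNUAL_SUPPORT_RATE

# Right-sizing a host from 32 cores to 16 halves the license count:
print(oracle_processor_licenses(32))  # 16 licenses
print(oracle_processor_licenses(16))  # 8 licenses
```

Because both the purchase price and the recurring support payment scale with the license count, every core removed from the licensed footprint pays off twice.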

Q2: Can we expect to be audited by Oracle or Microsoft this year?

A2: Software vendors in general seem to have become enamored with the lever of the software audit to pry loose additional revenues from their customers. Oracle’s audit tactics seem to be particularly aggressive, as noted in recent press articles. If you have not been audited by your software vendor in the past three years, then one of two possibilities may exist: 1) you are overpaying and the vendor is leaving you alone so as not to disrupt their revenue stream; or 2) you should expect and prepare for an audit within the coming year. House of Brick is seeing increased audit activity for Oracle, SQL Server, and VMware among others. Our software audit defense services seem to be a welcome support for our clients in these situations.

Q3: Why do we need a cloud strategy in 2019?

A3: The reality that we are seeing among our customers is that organizations without an intentional cloud strategy often end up having one forced upon them in less-than-ideal circumstances. Whether due to a C-level mandate, an expiring datacenter lease, a sudden need to pivot away from legacy technologies, or some other reason, cloud discussions will be at the forefront in 2019. The clients we help who have conducted hurried cloud migrations without a prepared and polished cloud strategy in place predictably experience worse outcomes, with escalated costs and a greater risk of performance or operational degradation.

Q4: What are good entry points for exploring the cloud?

A4: Organizations that are locked into on-premises datacenters for the next few years, due to factors such as long-term property leases or constrained budgets, should still work on formulating a solid cloud strategy. The strategy may start small by leveraging bundled cloud solutions for immediate scalability, operational improvements, or cost savings. One popular entry point is to leverage cloud-based solutions for file-sharing, archival, or backup platforms. Cloud storage is inexpensive and meets the requirement for off-premises storage of backups. Most backup software vendors support cloud storage as a backup target (or soon will), so many of our clients discover that they already have this functionality available.

Another popular path for beginning to leverage the cloud is to migrate a single application as a test case for discovering the strengths and weaknesses of the cloud as they apply to your organization. A mistake we have seen many organizations make is to “lift and shift” that application into the cloud. A lift-and-shift strategy usually results in a cloud deployment that is over-provisioned and potentially performs poorly. Take the time to weigh the comparative strengths of each cloud provider against the needs of the application you are migrating. An example is licensing for an Oracle-based application. Do you know how Oracle will be licensed in the cloud? Does Oracle licensing even work with the cloud provider you are evaluating? Completing a single application migration allows an organization to start laying the foundation for a successful cloud strategy that considers all of the details and requirements of your environment, including operational, resourcing, financial, customer service, regulatory, and security issues.

Q5: Should we replace vendor-supplied High Availability (HA) or Disaster Recovery (DR) solutions with other technologies?

A5: Most customers have already virtualized their databases, or are in the process of doing so. As part of virtualizing with vSphere, solutions such as VMware HA may be considered as replacements for technologies like Oracle RAC or SQL Server AlwaysOn for high availability. Similarly, Oracle Data Guard may be replaced by technologies like VMware SRM and array-based data replication. Oftentimes our clients find that vendor HA or DR solutions tend to be expensive, difficult to manage, and not necessary or applicable for all applications.

Q6: What is the industry direction for DR strategy and solutions?

A6: As mentioned above, many organizations are considering DR technologies that push replication lower in the application stack, leveraging array-based data replication and recovery tools like VMware SRM. What has been so compelling about these solutions, and we believe will continue to be compelling in 2019, is the ease of implementation and the simplification of DR run books. In the past, recovery plans have been complicated and out of date almost as soon as they were published.

Another exciting trend in disaster recovery is the use of cloud-based solutions as a DR platform. Cloud-based DR may help to avoid the expenses of provisioning a datacenter that may never be used except in an emergency. Cloud providers are viewed as ideal DR targets by many organizations because of their consumption-based billing models. In the cloud-based consumption model, a customer would only pay for full resources when the solution is actually in use (e.g., during a recovery test, or in an actual disaster failover). This could generate immense savings over dedicated disaster recovery solutions requiring the acquisition and provisioning of multiple data centers. House of Brick has recently been conducting Cloud DR strategy development engagements with our clients. Many times, these engagements receive full or partial funding from the cloud provider, who is eager to showcase their DR capabilities.
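The consumption-model math above can be sketched in a few lines. All of the rates below are hypothetical placeholders, not quotes from any provider; the point is only the structure of the comparison: a small always-on standby charge plus full-scale charges only for the hours DR resources actually run.

```python
# Hypothetical rates for comparing dedicated vs. consumption-based DR.
DEDICATED_DR_ANNUAL = 500_000   # second datacenter: hardware, power, space (illustrative)
CLOUD_STANDBY_MONTHLY = 2_000   # replicated storage + pilot-light resources (illustrative)
CLOUD_ACTIVE_HOURLY = 150       # full-scale compute, billed only while running (illustrative)

def cloud_dr_annual_cost(active_hours_per_year: int) -> float:
    """Pay the small standby rate all year; pay full rates only for the
    hours resources actually run (recovery tests or a real failover)."""
    return CLOUD_STANDBY_MONTHLY * 12 + CLOUD_ACTIVE_HOURLY * active_hours_per_year

# Two 24-hour DR tests per year:
print(cloud_dr_annual_cost(48))  # 31200 vs. 500000 for a dedicated site
```

Even with generous assumptions about test frequency, the consumption model's annual cost stays a small fraction of a dedicated standby datacenter, which is why cloud DR engagements are an attractive first cloud project.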

Q7: Is our hardware right-sized for processor-based software licensing? What are the advantages of newer Intel chips?

A7: House of Brick is seeing performance improvements of up to two times on newer Intel-based servers, compared to chips that are only three years old. Newer Intel chips offer significant performance and security improvements. What does this mean for IT budgets? Our clients are realizing performance enhancements and cost savings with newer hardware by shrinking the number of cores that have to be licensed for the vendor software. For deploying Oracle workloads into the VMware Cloud on AWS, we discussed several options for containing and reducing the licensing requirements in one of our latest white papers. Similar strategies may work with SQL Server deployments as well.
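The budget impact of faster chips is straightforward to estimate. The sketch below assumes a 2x per-core speedup and a hypothetical per-processor license price with Oracle's 0.5 Intel core factor; substitute your own measured speedup and contract pricing.

```python
import math

def cores_needed(old_cores: int, speedup: float) -> int:
    """If newer chips deliver `speedup`x per-core performance, the same
    workload fits on roughly old_cores / speedup cores (rounded up)."""
    return math.ceil(old_cores / speedup)

CORE_FACTOR = 0.5               # published factor for most Intel x86 chips
PRICE_PER_LICENSE = 47_500      # hypothetical per-processor list price

def license_cost(cores: int) -> float:
    return math.ceil(cores * CORE_FACTOR) * PRICE_PER_LICENSE

old_cores = 32
new_cores = cores_needed(old_cores, 2.0)            # 16 cores on ~2x-faster chips
print(license_cost(old_cores) - license_cost(new_cores))  # 380000 saved up front
```

The hardware refresh carries its own cost, of course, but as noted above, the software savings (purchase plus the recurring support stream it anchors) usually dwarf it.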

Q8: Where can we look for ways to reduce expenses in our IT budget?

A8: As most organizations are painfully aware, the largest line items in IT budgets are for vendor software purchases and support. We have the privilege of working with many clients to assess and optimize their Oracle, SQL Server, and VMware deployments. While conducting these projects, we have learned that many times a client can reduce their overall budget by reducing software purchase and support payments through the optimization of infrastructure design. This may involve an increase in the hardware budget, but the cost of the newer infrastructure is usually more than offset by the resulting software savings. In fact, we find that analysis of the entire application stack presents the best opportunity for maximizing budget savings. In addition to license compliance verification, as part of our licensing assessment projects, we evaluate the hardware infrastructure design in order to right-size and optimize the number of software licenses needed.

Q9: We are looking at hyperconverged hardware for our databases. What hardware, architecture and licensing information should we be considering?

A9: As the industry continues to adopt hyperconverged infrastructure, it is important to understand the differences between these systems and more traditional architectures. This includes differences in CPU utilization, the use of Software Defined Storage (SDS), and I/O throughput for business-critical databases. An example of something to evaluate when considering a hyperconverged platform is that the CPU running your database will also be tasked with storage control processes. In traditional or converged platforms, the storage controllers were located on the storage array and did not have to be licensed for vendor software, so the database processes never had to compete with storage controllers for CPU time. Considering the true cost of expensive processor-based database licenses (whether that is Oracle, SQL Server, or others), depriving those workloads of CPU cycles for anything not related to their core operations results in a higher-than-expected effective price per CPU cycle. If you are considering hyperconverged platforms, we encourage you to evaluate all of these impacts, and to architect the solution so that vital resources are dedicated to the most expensive processes wherever possible.
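The "effective price per CPU cycle" argument can be sketched as a quick calculation. The 15% storage-controller overhead and the per-core price below are hypothetical figures chosen for illustration; measure the actual SDS overhead on your candidate platform.

```python
def effective_price_per_core(licensed_cores: int,
                             price_per_core: float,
                             storage_overhead: float) -> float:
    """On hyperconverged nodes the SDS storage controller consumes a fraction
    of the cores you licensed for the database; divide total spend by the
    cores the database actually gets.  storage_overhead of 0.15 means ~15%
    of CPU goes to storage control (hypothetical figure)."""
    usable_cores = licensed_cores * (1 - storage_overhead)
    return (licensed_cores * price_per_core) / usable_cores

# 16 licensed cores at a hypothetical $23,750 each (47,500 x 0.5 core factor):
print(round(effective_price_per_core(16, 23_750, 0.15), 2))  # 27941.18 per usable core
```

In other words, a 15% storage tax inflates the effective cost of every database core by roughly 18%, which is why dedicating licensed cores to database work matters so much on these platforms.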

In this blog post, we touched on a number of important topics with the intent of helping you prepare for what may come in 2019. We look forward to some surprises as well, so check our resources page throughout the year as we explore developing technologies, expanding service offerings, evolving solutions, and exciting challenges through our white papers, blogs, webinars and presentations.

If you have questions or feedback regarding any of the topics discussed in this post, please let us know in the comments section below.
