Conor O'Mahony's Database Diary

Your source of IBM database software news (DB2, Informix, Hadoop, & more)

Archive for the ‘Hardware’ Category

Anatomy of an Oracle Marketing Claim

with 10 comments

Yesterday, Oracle announced a new TPC-C benchmark result. They claim:

In this benchmark, the Sun Fire X4800 M2 server equipped with eight Intel® Xeon® E7-8870 processors and 4TB of Samsung’s Green DDR3 memory, is nearly 3x faster than the best published eight-processor result posted by an IBM p570 server equipped with eight Power 6 processors and running DB2. Moreover, Oracle Database 11g running on the Sun Fire X4800 M2 server is nearly 60 percent faster than the best DB2 result running on IBM’s x86 server.

Let’s have a closer look at this claim, starting with the first part: “nearly 3x faster than the best published eight-processor result posted by an IBM p570 server”. Interestingly, Oracle do not lead by comparing their new leading x86 result with IBM’s leading x86 result. Instead, they compare their new result to an IBM result from 2007, exploiting the fact that, even though this IBM result was on a different platform, it uses the same number of processors. Of course, we all know that the advances in hardware, storage, networking, and software technology over half a decade are simply too great for this to form any basis for reasonable comparison. Thankfully, most people will see straight through this shallow attempt by Oracle to make themselves look better than they are. I cannot imagine any reasonable person claiming that Oracle’s x86 solutions offer 3x the performance of IBM’s Power Systems solutions when comparing today’s technology. I’m sure most people will agree that this first comparison is simply meaningless.

Okay, now let’s look at the second claim: “nearly 60 percent faster than the best DB2 result running on IBM’s x86 server”. Here, Oracle do compare their new leading x86 result with IBM’s leading x86 result. However, if you look at the benchmark details, you will see that IBM’s result uses half the number of processors, CPU cores, and CPU threads. On a per-core basis, the Oracle result achieves 60,046 tpmC per core, while the IBM result achieves 75,367 tpmC per core. So while Oracle claims to be 60% faster, if you take relative system size into account and compare performance per core, IBM is actually 25% faster than Oracle.

Finally, let’s not forget the price/performance metric from these benchmark results. This new Oracle result achieved US$.98/tpmC, whereas the leading IBM x86 result achieved US$.59/tpmC. That’s correct: when you determine the cost of processing each transaction for these two benchmark results, IBM is 39% less expensive than Oracle. (BTW, I haven’t yet had a chance to determine whether Oracle Used their Usual TPC Price/Performance Tactics for this benchmark result, as the result details are not yet available to me; but if they have, the IBM system will prove to be even less expensive relative to the Oracle system.)

Benchmark results are as of January 17, 2012. Source: Transaction Processing Performance Council (TPC).
Oracle result: Oracle Sun Fire X4800 M2 server (8 chips/80 cores/160 threads) – 4,803,718 tpmC, US$.98/tpmC, available 06/26/12.
IBM results: IBM System p 570 server (8 chips/16 cores/32 threads) – 1,616,162 tpmC, US$3.54/tpmC, available 11/21/2007. IBM System x3850 X5 (4 chips/40 cores/80 threads) – 3,014,684 tpmC, US$.59/tpmC, available 09/22/11.
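If you want to check the per-core and price/performance figures yourself, they follow directly from the published numbers above. Here is a minimal sketch in Python (the percentages in the post are rounded; the raw ratios come out at roughly 25.5% and 39.8%):

```python
# Back-of-the-envelope check of the per-core and price/performance claims,
# using the published TPC-C figures quoted above (x86 results only).
results = {
    "Oracle Sun Fire X4800 M2": {"tpmC": 4_803_718, "cores": 80, "usd_per_tpmC": 0.98},
    "IBM System x3850 X5":      {"tpmC": 3_014_684, "cores": 40, "usd_per_tpmC": 0.59},
}

for name, r in results.items():
    per_core = r["tpmC"] / r["cores"]
    print(f"{name}: {per_core:,.0f} tpmC per core at US${r['usd_per_tpmC']}/tpmC")

oracle = results["Oracle Sun Fire X4800 M2"]
ibm = results["IBM System x3850 X5"]

# IBM's per-core throughput advantage (~25%)
per_core_advantage = (ibm["tpmC"] / ibm["cores"]) / (oracle["tpmC"] / oracle["cores"]) - 1
# IBM's cost-per-transaction advantage (~40%)
cost_advantage = 1 - ibm["usd_per_tpmC"] / oracle["usd_per_tpmC"]

print(f"IBM per-core advantage: {per_core_advantage:.1%}")
print(f"IBM price/tpmC advantage: {cost_advantage:.1%}")
```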


Written by Conor O'Mahony

January 18, 2012 at 11:01 am

Comparing “New Big Data” with IMS on the Mainframe

with 2 comments

While it does not come up often in today’s data management conversations, the IMS database software is at the heart of many major corporations around the world. For many people, it is the undisputed leader for mission-critical, enterprise transaction and data-serving workloads. IMS users routinely handle peaks of 100 million transactions in a day, and there are quite a few users who report more than 3,000 days without unplanned outages. That’s more than 8 years without an unplanned outage!

IBM recently announced IMS 12, claiming peak performance of a remarkable 66,000 transactions per second. The new release features improved performance and CPU efficiency for most IMS use cases, and a significant improvement in performance for certain use cases. For instance, the new Fast Path Secondary Index support makes workloads that use a secondary index 60% faster.

It is interesting to compare the performance of IMS with the headline-grabbing “big data” solutions that are all the rage today. For instance, at the end of August this year, we read how Beyonce Pregnancy News Births New Twitter Record Of 8,868 Tweets Per Second. I am not saying that IMS can replace the infrastructure of Twitter. Far from it. However, I am saying that, when you consider that IMS can handle 66,000 transactions per second, the relative performance levels of the “new big data” solutions when compared with IMS are food for thought. Especially when you consider the very significant infrastructure in place at Twitter, and the staff needed to manage that infrastructure. And don’t forget that IMS supports these performance levels with full read-write capability, full data integrity, and mainframe-level security.
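To put those figures side by side, here is a trivial back-of-the-envelope calculation using the numbers quoted above. (The comparison is loose, of course: a tweet and an IMS banking transaction are very different workloads.)

```python
# Rough comparison of the throughput figures mentioned above.
ims_peak_tps = 66_000         # IMS 12 claimed peak, transactions per second
twitter_record_tps = 8_868    # tweets-per-second record from the Beyonce news
ims_daily_peak = 100_000_000  # routine daily peak reported by IMS users

ratio = ims_peak_tps / twitter_record_tps
avg_tps_for_daily_peak = ims_daily_peak / 86_400  # seconds in a day

print(f"IMS peak is roughly {ratio:.1f}x the Twitter record")
print(f"100M transactions/day averages {avg_tps_for_daily_peak:,.0f} tps")
```

Note that 100 million transactions per day averages out to only about 1,157 per second; the 66,000 figure is a peak rate, which is the right number to hold against a tweets-per-second record.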

I appreciate that many of today’s Web-scale businesses begin with capital investments that preclude the hardware and software investments required for something like IMS. These new businesses need to be relatively agile, and depend upon the low barrier of entry that x86-based systems and open source/inexpensive software afford. However, I still think it interesting to put this “new big data” in perspective.

Written by Conor O'Mahony

November 9, 2011 at 2:17 pm

New IBM Smart Analytics Systems

leave a comment »

Oracle garnered a lot of headlines a couple of weeks ago with their Oracle Database Appliance. It didn’t take long for SmarterQuestions to indicate why the IBM Smart Analytics Systems are A Smarter Database System for SMB Clients.

Recently, IBM added the following systems:

  • IBM Smart Analytics System 5710, which is an x86-based Linux system
  • IBM Smart Analytics System 7710, which is a Power Systems-based UNIX system
  • IBM Smart Analytics System 9710, which is a mainframe-based system

These systems include everything you need to quickly set up a data warehouse environment, and to quickly have your business analysts working with the data.

On top of the servers and storage, each system includes database and data warehouse software, Cognos software, cubing services, data mining capabilities, and text analytics capabilities. And it is available on your platform of choice (Linux, UNIX, or mainframe). It is also competitively priced, when you consider that the starting price for the 5710 is under $50k, just like the Oracle appliance. However, the IBM system includes all of the necessary software, whereas with the Oracle appliance you have to purchase the Oracle Database software separately. And the Oracle Database software is not exactly inexpensive.

If you want to learn more, please visit the IBM Smart Analytics Systems Web page.

Written by Conor O'Mahony

October 13, 2011 at 11:26 am

Most Popular Presentation from IDUG DB2 Tech Conference 2011 in Anaheim

leave a comment »

The International DB2 User Group (IDUG) is presenting a free webcast featuring the most popular presentation from the most recent IDUG DB2 Tech Conference, as voted by attendees at the conference. Suresh Sane will present his A DB2 10 Customer’s Experience presentation, which will describe his experiences with DB2 10 for z/OS, including:

  • How new SQL features help
  • How hash access speeds up queries against large tables
  • How access path determination is now smarter
  • How concurrency is improved without sacrificing integrity
  • How temporal tables simplify code

This webcast is a must-see for database administrators and application developers. It is filled with rich content and helpful hints and tips. As a special bonus, everyone who registers for the webcast will receive a complimentary copy of the Business Value of DB2 10 – Smarter Database for a Smarter Planet report by Julian Stuhler, Triton Consulting. The webcast will take place at 11am ET on Wednesday, 02 November 2011. To register, please go to DB2 10 Application Topics—A DB2 10 Customer’s Experience.

Written by Conor O'Mahony

October 6, 2011 at 12:24 pm

What Happens when you pair Netezza with DB2 for z/OS?

with one comment

The American Association of Retired Persons (AARP) recently paired Netezza with their transactional environment, which includes DB2 for z/OS, and achieved remarkable results. Often when you read customer success stories, you are bombarded with metrics. The AARP success story has those metrics:

  • 1400% improvement in data loading time
  • 1700% improvement in report response time
  • 347% ROI in three years

But metrics tell only part of the story. And sometimes the story gets a lot more interesting when you dig a little deeper.

AARP had been using Oracle Database for their data warehouse. But their system simply could not keep up with the demand. As Margherita Bruni from AARP says, “our analysts would run a report, then go for coffee or for lunch, and, maybe if they were lucky, by 5:00 p.m. they would get the response—it was unacceptable—the system was so busy writing the new daily data that it didn’t give any importance to the read operations performed by users.” The stresses on their system were so great that in 2009 alone, their Oracle Database environment had more than 30 system failures. To compound matters, these system performance issues meant that full backups were not possible. Instead AARP would back up only a few critical tables, which is a less than desirable disaster recovery scenario. Clearly, something had to be done.

AARP chose to move their 36 TB data warehouse to Netezza. You can see from the metrics above that they achieved remarkable performance improvements. But what do those performance improvements mean? Well, for the IT staff, they mean relief from a huge daily burden. Their old system required one full-time database administrator (DBA) and one half-time SAN network support person. These people are now, for the most part, free to work on other projects. And more importantly, they no longer have to deal with the stress of the old environment.

But the benefits are not being enjoyed only by the IT staff. They are also being enjoyed by the business analysts, who according to Bruni “could not believe how quickly results were provided—they were so shocked that their work could be accomplished in a matter of hours rather than weeks that, initially, they thought data was cached.” She goes on to say that “one analyst, who is now a director, told us that he used the extra time for other projects, which ultimately helped him become more successful and receive a promotion.” Now that is what I call a great impact statement. The metrics are great, but when someone is freed up to do work that gets them a promotion, that’s a very tangible illustration of the difference that Netezza can make.

Another illustration of the difference is the impact it had on the group that implemented the Netezza system. As Bruni says “after we moved to IBM Netezza, the word spread that we were doing things right and that leveraging us as an internal service was really smart; we’ve gained new mission-critical areas, such as the social-impact area which supports our Drive to End Hunger and Create the Good campaigns.” It certainly looks like you can add IT management to the list of constituents who have had a positive career impact as a result of moving from Oracle Database to IBM Netezza.

For more information about this story, see AARP: Achieving a 347 percent ROI in three years from BI modernization effort.

Written by Conor O'Mahony

September 27, 2011 at 10:33 am

More Organizations Move up to the Mainframe and DB2 for z/OS

with 2 comments

Any of you who are familiar with DB2 on the mainframe (officially known as DB2 for z/OS) know how efficient it is. The mainframe is not for every organization. However, for those organizations for whom the mainframe is a good fit, the tremendous levels of efficiency, reliability, availability, and security directly translate into significant cost savings.

Database software on the mainframe may be relatively boring when compared with the data management flavor of the day (whether it is Hadoop or any of the other technologies associated with Big Data). But when it comes to storing mission-critical transactions, nothing beats the ruthless efficiency of the mainframe. And that boring, ruthless efficiency has been winning over organizations.

Earlier this year, eWeek reported how the IBM Mainframe Replaces HP, Oracle Systems for Payment Solutions. In this article, eWeek describe how Payment Solution Providers (PSP) from Canada chose DB2 for z/OS over Oracle Database on HP servers. A couple of items in this article really catch the eye. One is that the operational efficiencies of the mainframe are expected to lower IT costs up to 35 percent for PSP. The other is that PSP’s system can now process up to 5,000 transactions per second.

Another organization that moved in the same direction is BC Card—Korea’s largest credit card company. The Register ran a story about how a Korean bank dumps Unix boxes for mainframes. BC Card is a coalition of 11 South Korean banks that handles credit card transactions for 2.62 million merchants and 40 million credit card holders in the country. They dumped their HP and Oracle Sun servers in favor of an IBM mainframe. In an accompanying IBM press release, it was revealed that IBM scored highest in every benchmark test category, from performance to security to flexibility. Another significant factor in the move to the mainframe is the combination of utility pricing that lets customers activate and deactivate mainframe engines on demand, together with software pricing that scales up and down with capacity.

Despite continual predictions of its demise, it has been reported that the mainframe has experienced one of its best years ever, with an increase in usage (well, technically MIPS) of 86% over the same time in 2010. Much of this growth is coming from customers new to the mainframe. In fact, since the System z196 mainframe started shipping in the third quarter of 2010, IBM has added 68 new mainframe customers, with more than two-thirds of them consolidating from distributed systems.

It may not be as exciting as the newest technology on the block, but it is difficult to beat the reliability and efficiency of the mainframe. Especially when you are faced with the realities of managing a relatively large environment, and all of the costs associated with doing so. And don’t forget, the mainframe can provide you with a hierarchical store, a relational store, or a native XML store. And when you combine the security advantages and the 24×7 availability, together with cost efficiency, it makes for an interesting proposition.

Written by Conor O'Mahony

September 19, 2011 at 8:30 am

Industry Benchmark Result for DB2 pureScale: SAP Transaction Banking (TRBK) Benchmark

with 7 comments

A couple of years ago, IBM introduced the pureScale feature, which provides application cluster transparency (allowing you to create shared-disk database clusters). At the time, IBM took their industry-leading clustering architecture from the mainframe and brought it to Unix environments. IBM subsequently also brought it to Linux environments.

Today, IBM announced its first public industry benchmark result for this cluster technology. IBM achieved a record result for the SAP Transaction Banking (TRBK) Benchmark, processing more than 56 million posting transactions per hour and more than 22 million balanced accounts per hour. The results were achieved using IBM DB2® 9.7 on SUSE Linux® Enterprise Server. The cluster contained five IBM System x3690 X5 database servers, and used the IBM System Storage® DS8800 disk system. The servers were configured to take over workload in case of a single system failure, thereby supporting high application availability. For more details, see the official certification from SAP.

Written by Conor O'Mahony

September 12, 2011 at 11:16 am
