Conor O'Mahony's Database Diary

Your source of IBM database software news (DB2, Informix, Hadoop, & more)

Archive for the ‘Performance’ Category

Baltic Bank Moves from Oracle to DB2 to Improve Performance, Lower Costs, and Increase Availability


JSC Rietumu Banka is one of the largest banks in the Baltic states. They recently migrated their data from Oracle Database on Sun servers to IBM DB2 on Power Systems servers, and enjoyed the following benefits:

  • Up to 30 times faster query performance
  • 20-30% reduction in total cost of ownership
  • 200% improvement in data availability

Like many major banks, JSC Rietumu Banka faced recent pressure to reduce IT costs. In particular, they were concerned with total cost of hardware, software, and staffing for their banking applications which used Oracle Database on Sun servers. After a thorough technical and financial evaluation, JSC Rietumu Banka chose to migrate their environment to DB2 on Power Systems servers.

Of course, the ease of migration was a significant factor in JSC Rietumu Banka being able to achieve these benefits. For more information about the “compatibility features” that make it easy to migrate from Oracle Database to IBM DB2, see Gartner: IBM DB2’s Maturing Oracle Compatibility Presents Opportunities, with some Limitations.

To learn more about this specific migration, read the full IBM case study.


Written by Conor O'Mahony

March 4, 2012 at 9:05 pm

NYSE Euronext uses Netezza to Manage their “Big Data”


NYSE Euronext operates multiple securities exchanges, including the New York Stock Exchange and Euronext. As you might imagine, securities exchanges present significant data management challenges. But NYSE Euronext didn’t just want to have a transactional system; they wanted to do much more with their data, further increasing the challenges. At the 2011 IBM Information On Demand (IOD) conference, NYSE Euronext described their challenges and the solution they chose. In particular, they highlighted Netezza’s tremendous performance and how fast it is to get up-and-running with Netezza.

Not only is it easy to get up-and-running with Netezza, but it is easy to manage your environment on an ongoing basis. You can hear for yourself in this short video segment…

Written by Conor O'Mahony

February 24, 2012 at 12:02 pm

Anatomy of an Oracle Marketing Claim


Yesterday, Oracle announced a new TPC-C benchmark result. They claim:

In this benchmark, the Sun Fire X4800 M2 server equipped with eight Intel® Xeon® E7-8870 processors and 4TB of Samsung’s Green DDR3 memory, is nearly 3x faster than the best published eight-processor result posted by an IBM p570 server equipped with eight Power 6 processors and running DB2. Moreover, Oracle Database 11g running on the Sun Fire X4800 M2 server is nearly 60 percent faster than the best DB2 result running on IBM’s x86 server.

Let’s have a closer look at this claim, starting with the first part: “nearly 3x faster than the best published eight-processor result posted by an IBM p570 server”. Interestingly, Oracle do not lead by comparing their new leading x86 result with IBM’s leading x86 result. Instead they choose to compare their new result to an IBM result from 2007, exploiting the fact that even though this IBM result was on a different platform, it uses the same number of processors. Of course, we all know that the advances in hardware, storage, networking, and software technology over half a decade are simply too great to form any basis for reasonable comparison. Thankfully, most people will see straight through this shallow attempt by Oracle to make themselves look better than they are. I cannot imagine any reasonable person claiming that Oracle’s x86 solutions offer 3x the performance of IBM’s Power Systems solutions, when comparing today’s technology. I’m sure most people will agree that this first comparison is simply meaningless.

Okay, now let’s look at the second claim: “nearly 60 percent faster than the best DB2 result running on IBM’s x86 server”. Oracle now compare their new leading x86 result with IBM’s leading x86 result. However, if you look at the benchmark details, you will see that IBM’s result uses half the number of processors, cores, and threads. If you look at performance per core, the Oracle result achieves 60,046 tpmC per core, while the IBM result achieves 75,367 tpmC per core. While Oracle claims to be 60% faster, if you take into account relevant system size and determine the performance per core, IBM is actually 25% faster than Oracle.

Finally, let’s not forget the price/performance metric from these benchmark results. This new Oracle result achieved US$0.98/tpmC, whereas the leading IBM x86 result achieved US$0.59/tpmC. That’s correct, when you determine the cost of processing each transaction for these two benchmark results, IBM is 39% less expensive than Oracle. (BTW, I haven’t had a chance yet to determine if Oracle Used their Usual TPC Price/Performance Tactics for this benchmark result, as the result details are not yet available to me; but if they have, the IBM system will prove to be even less expensive than the Oracle system.)

Benchmark results are as of January 17, 2012. Source: Transaction Processing Performance Council (TPC).
Oracle result: Oracle Sun Fire X4800 M2 server (8 chips/80 cores/160 threads) – 4,803,718 tpmC, US$0.98/tpmC, available 06/26/12.
IBM results: IBM System p 570 server (8 chips/16 cores/32 threads) – 1,616,162 tpmC, US$3.54/tpmC, available 11/21/07. IBM System x3850 X5 server (4 chips/40 cores/80 threads) – 3,014,684 tpmC, US$0.59/tpmC, available 09/22/11.
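The per-core and price/performance comparisons above are simple arithmetic over the published figures. Here is a short Python sketch (figures copied from the benchmark details above; illustrative only) that makes the calculation explicit:

```python
# TPC-C figures as quoted in this post
results = {
    "Oracle Sun Fire X4800 M2": {"tpmC": 4_803_718, "cores": 80, "usd_per_tpmC": 0.98},
    "IBM System p 570":         {"tpmC": 1_616_162, "cores": 16, "usd_per_tpmC": 3.54},
    "IBM System x3850 X5":      {"tpmC": 3_014_684, "cores": 40, "usd_per_tpmC": 0.59},
}

# Throughput per core for each result
for name, r in results.items():
    print(f"{name}: {r['tpmC'] / r['cores']:,.0f} tpmC per core")

oracle = results["Oracle Sun Fire X4800 M2"]
ibm = results["IBM System x3850 X5"]

# Per-core comparison of the two x86 results: IBM comes out roughly 25% ahead
per_core_advantage = (ibm["tpmC"] / ibm["cores"]) / (oracle["tpmC"] / oracle["cores"]) - 1

# Price/performance: IBM's cost per tpmC is just under 40% lower (39% as quoted above)
price_advantage = 1 - ibm["usd_per_tpmC"] / oracle["usd_per_tpmC"]
```

Running the numbers this way reproduces the figures in the post: 60,046 tpmC per core for Oracle versus 75,367 tpmC per core for IBM.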

Written by Conor O'Mahony

January 18, 2012 at 11:01 am

What will Happen to “In-Memory” when Storage Class Memory Arrives?


During this week’s keynote address at the International DB2 User Group (IDUG) conference in Prague, Namik Hrle talked about Storage Class Memory. Storage Class Memory is a technology in development that promises the performance of Solid State Drive (SSD) technology at the low cost of Hard Disk Drive (HDD) technology. It also promises compelling breakthroughs in space and power consumption. Storage Class Memory is essentially the marriage of scalable non-volatile memory technology and ultra high-density technology. Here is a table that projects the 2020 characteristics of Storage Class Memory:

[Table: projected 2020 characteristics of Storage Class Memory]

This table was actually created in 2008. From what Mr. Hrle says, we are tracking ahead of this schedule and will have these capabilities available sooner than 2020.

The performance limitations of disk-based systems have led to the addition of many database and data warehouse “features” (clever optimizations that address these limitations, and provide acceptable performance). If Storage Class Memory delivers on its random and sequential I/O performance promises, as well as its cost promises, many of these optimizations will become either less important, or perhaps unnecessary. In fact, it makes you wonder if our industry’s current fixation with in-memory capabilities may be short-sighted. Several vendors have in-memory database product visions that will not be realized until the latter half of this decade, which is a similar time frame to the projected availability of low-cost Storage Class Memory. Certainly food for thought…

Written by Conor O'Mahony

November 17, 2011 at 10:17 am

Posted in Cost, Performance

What Happens when you pair Netezza with DB2 for z/OS?


The American Association of Retired Persons (AARP) recently paired Netezza with their transactional environment, which includes DB2 for z/OS, and achieved remarkable results. Often when you read customer success stories, you are bombarded with metrics. The AARP success story has those metrics:

  • 1400% improvement in data loading time
  • 1700% improvement in report response time
  • 347% ROI in three years

But metrics tell only part of the story. And sometimes the story gets a lot more interesting when you dig a little deeper.

AARP had been using Oracle Database for their data warehouse. But their system simply could not keep up with the demand. As Margherita Bruni from AARP says, “our analysts would run a report, then go for coffee or for lunch, and, maybe if they were lucky, by 5:00 p.m. they would get the response—it was unacceptable—the system was so busy writing the new daily data that it didn’t give any importance to the read operations performed by users.” The stresses on their system were so great that in 2009 alone, their Oracle Database environment had more than 30 system failures. To compound matters, these system performance issues meant that full backups were not possible. Instead AARP would back up only a few critical tables, which is a less than desirable disaster recovery scenario. Clearly, something had to be done.

AARP chose to move their 36 TB data warehouse to Netezza. You can see from the metrics above that they achieved remarkable performance improvements. But what do those performance improvements mean in practice? Well, for the IT staff, they mean relief from a huge daily burden. The old system required one full-time database administrator (DBA) and one half-time SAN network support person. These people are now, for the most part, free to work on other projects. And more importantly, they no longer have to deal with the stress of the old environment.

But the benefits are not being enjoyed only by the IT staff. They are also being enjoyed by the business analysts, who according to Bruni “could not believe how quickly results were provided—they were so shocked that their work could be accomplished in a matter of hours rather than weeks that, initially, they thought data was cached.” She goes on to say that “one analyst, who is now a director, told us that he used the extra time for other projects, which ultimately helped him become more successful and receive a promotion.” Now that is what I call a great impact statement. The metrics are great, but when someone is freed up to do work that gets them a promotion, that’s a very tangible illustration of the difference that Netezza can make.

Another illustration of the difference is the impact it had on the group that implemented the Netezza system. As Bruni says “after we moved to IBM Netezza, the word spread that we were doing things right and that leveraging us as an internal service was really smart; we’ve gained new mission-critical areas, such as the social-impact area which supports our Drive to End Hunger and Create the Good campaigns.” It certainly looks like you can add IT management to the list of constituents who have had a positive career impact as a result of moving from Oracle Database to IBM Netezza.

For more information about this story, see AARP: Achieving a 347 percent ROI in three years from BI modernization effort.

Written by Conor O'Mahony

September 27, 2011 at 10:33 am

Benchmark Results for Informix TimeSeries in Meter Data Management


AMT-SYBEX are a leading provider of platforms for traditional and smart metering. They created the Meterflow Benchmark to help customers choose the best underpinning infrastructure for their platform, and they worked with IBM to run that benchmark with Informix TimeSeries. I previously blogged about Why Informix Rules for Time Series Data Management. Well, the results of this benchmark further illustrate the benefits of Informix TimeSeries. The following quote is from the resulting AMT-SYBEX case study:

We believe that this represents ground breaking levels of performance which is ten times faster than other published benchmarks in this area.

As you can see, Informix is 10x faster than other published benchmarks in this area. If you read the Executive Summary, you will also see that IBM Informix enjoys almost linear scalability when going from 10 million meters up to 100 million meters, which is a great testament to the operational efficiency of Informix TimeSeries.

Written by Conor O'Mahony

September 26, 2011 at 1:50 pm

Industry Benchmark Result for DB2 pureScale: SAP Transaction Banking (TRBK) Benchmark


A couple of years ago, IBM introduced the pureScale feature, which provides application cluster transparency (allowing you to create shared-disk database clusters). At the time, IBM had taken their industry-leading clustering architecture from the mainframe, and brought it to Unix environments. IBM subsequently also brought it to Linux environments.

Today, IBM announced its first public industry benchmark result for this cluster technology. IBM achieved a record result for the SAP Transaction Banking (TRBK) Benchmark, processing more than 56 million posting transactions per hour and more than 22 million balanced accounts per hour. The results were achieved using IBM DB2® 9.7 on SUSE Linux® Enterprise Server. The cluster contained five IBM System x 3690 X5 database servers, and used the IBM System Storage® DS8800 disk system. The servers were configured to take over workload in case of a single system failure, thereby supporting high application availability. For more details, see the official certification from SAP.

Written by Conor O'Mahony

September 12, 2011 at 11:16 am
