
Conor O'Mahony's Database Diary

Your source of IBM database software news (DB2, Informix, Hadoop, & more)

Archive for the ‘Cost’ Category

Baltic Bank Moves from Oracle to DB2 to Improve Performance, Lower Costs, and Increase Availability


JSC Rietumu Banka is one of the largest banks in the Baltic states. They recently migrated their data from Oracle Database on Sun servers to IBM DB2 on Power Systems servers, and enjoyed the following benefits:

  • Up to 30 times faster query performance
  • 20-30% reduction in total cost of ownership
  • 200% improvement in data availability

Like many major banks, JSC Rietumu Banka faced recent pressure to reduce IT costs. In particular, they were concerned with total cost of hardware, software, and staffing for their banking applications which used Oracle Database on Sun servers. After a thorough technical and financial evaluation, JSC Rietumu Banka chose to migrate their environment to DB2 on Power Systems servers.

Of course, the ease of migration was a significant factor in JSC Rietumu Banka being able to achieve these benefits. For more information about the “compatibility features” that make it easy to migrate from Oracle Database to IBM DB2, see Gartner: IBM DB2's Maturing Oracle Compatibility Presents Opportunities, with some Limitations.
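To give a flavor of what these compatibility features look like in practice, here is a rough sketch of how DB2's Oracle compatibility mode is typically enabled through the DB2_COMPATIBILITY_VECTOR registry variable. The database name is made up for illustration, and the exact steps can vary by DB2 version, so treat this as an outline rather than a migration recipe:

```shell
# Enable Oracle compatibility mode. This must be set before the
# database is created; it has no effect on existing databases.
db2set DB2_COMPATIBILITY_VECTOR=ORA
db2stop
db2start

# Create a database (TESTDB is a hypothetical name). A 32 K page size
# is commonly used to accommodate Oracle-style row sizes.
db2 "CREATE DATABASE TESTDB AUTOMATIC STORAGE YES PAGESIZE 32 K"

# With compatibility mode on, DB2 accepts Oracle-style constructs,
# such as the VARCHAR2 and NUMBER data types:
db2 "CONNECT TO TESTDB"
db2 "CREATE TABLE accounts (id NUMBER(10), name VARCHAR2(100))"
```

Because DB2 can natively run PL/SQL and understand Oracle data types in this mode, much of an application's database code can often be moved over without a rewrite, which is what makes migrations like the one described above feasible.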

To learn more about this specific migration, read the full IBM case study.


Written by Conor O'Mahony

March 4, 2012 at 9:05 pm

What will Happen to “In-Memory” when Storage Class Memory Arrives?


During this week’s keynote address at the International DB2 User Group (IDUG) conference in Prague, Namik Hrle talked about Storage Class Memory. Storage Class Memory is a technology in development that promises the performance of Solid State Drive (SSD) technology at the low cost of Hard Disk Drive (HDD) technology. It also promises compelling breakthroughs in space and power consumption. Storage Class Memory is essentially the marriage of scalable non-volatile memory technology and ultra high-density technology. Here is a table that projects the 2020 characteristics of Storage Class Memory:

Storage Class Memory

This table was actually created in 2008. From what Mr. Hrle says, we are tracking ahead of this schedule and will have these capabilities available sooner than 2020.

The performance limitations of disk-based systems have led to the addition of many database and data warehouse “features” (clever optimizations that address these limitations, and provide acceptable performance). If Storage Class Memory delivers on its random and sequential I/O performance promises, as well as its cost promises, many of these optimizations will become either less important, or perhaps unnecessary. In fact, it makes you wonder if our industry’s current fixation with in-memory capabilities may be short-sighted. Several vendors have in-memory database product visions that will not be realized until the latter half of this decade, which is a similar time frame to the projected availability of low-cost Storage Class Memory. Certainly food for thought…

Written by Conor O'Mahony

November 17, 2011 at 10:17 am

Posted in Cost, Performance

Ray Wang Compares IBM DB2 and Oracle Database for SAP Environments


Last week, the Americas SAP User Group (ASUG) hosted a Webcast titled Optimize your SAP Environment While Reducing Costs. The Webcast was delivered by the inimitable Ray Wang, Principal Analyst and CEO at Constellation Research, together with Jack Mason from SAP and Larry Spoerl from IBM. It discusses how to deliver world-class performance for SAP applications while reducing the Total Cost of Ownership (TCO) of the underpinning infrastructure. It is full of great practical advice, including direct comparisons of IBM DB2 and Oracle Database as part of that underpinning infrastructure. For the commentary that goes with the following chart, make sure to check out the Webcast materials:

Compare Oracle Database and IBM DB2 for SAP - ASUG Webcast

Written by Conor O'Mahony

October 4, 2011 at 8:45 am

More Organizations Move up to the Mainframe and DB2 for z/OS


Any of you who are familiar with DB2 on the mainframe (officially known as DB2 for z/OS) know how efficient it is. The mainframe is not for every organization. However, for those organizations where the mainframe is a good fit, its tremendous levels of efficiency, reliability, availability, and security directly translate into significant cost savings.

Database software on the mainframe may be relatively boring when compared with the data management flavor of the day (whether it is Hadoop or any of the other technologies associated with Big Data). But when it comes to storing mission-critical transactions, nothing beats the ruthless efficiency of the mainframe. And that boring, ruthless efficiency has been winning over organizations.

Earlier this year, eWeek reported how the IBM Mainframe Replaces HP, Oracle Systems for Payment Solutions. In this article, eWeek describes how Payment Solution Providers (PSP) from Canada chose DB2 for z/OS over Oracle Database on HP servers. A couple of items in this article really catch the eye. One is that the operational efficiencies of the mainframe are expected to lower IT costs up to 35 percent for PSP. The other is that PSP’s system can now process up to 5,000 transactions per second.

Another organization that moved in the same direction is BC Card—Korea’s largest credit card company. The Register ran a story about how a Korean bank dumps Unix boxes for mainframes. BC Card is a coalition of 11 South Korean banks that handles credit card transactions for 2.62 million merchants and 40 million credit card holders in the country. They dumped their HP and Oracle Sun servers in favor of an IBM mainframe. In an accompanying IBM press release, it was revealed that IBM scored highest in every benchmark test category, from performance to security to flexibility. Another significant factor in moving to the mainframe is the combination of utility pricing, which lets customers activate and deactivate mainframe engines on demand, together with software pricing that scales up and down with capacity.

Despite continual predictions of its demise, it has been reported that the mainframe has experienced one of its best years ever, with an increase in usage (well, technically MIPS) of 86% from the same time in 2010. Much of this growth is coming from customers new to the mainframe. In fact, since the System z196 mainframe started shipping in the third quarter of 2010, IBM has added 68 new mainframe customers, with more than two-thirds of them consolidating from distributed systems.

It may not be as exciting as the newest technology on the block, but it is difficult to beat the reliability and efficiency of the mainframe. Especially when you are faced with the realities of managing a relatively large environment, and all of the costs associated with doing so. And don’t forget, the mainframe can provide you with a hierarchical store, a relational store, or a native XML store. And when you combine the security advantages and the 24×7 availability, together with cost efficiency, it makes for an interesting proposition.

Written by Conor O'Mahony

September 19, 2011 at 8:30 am

IBM Smart Analytics System vs. Oracle Exadata for Data Warehouse Environments


Here is a video where Philip Howard, Research Director at Bloor Research, evaluates performance, scalability, administration, and cost considerations for IBM Smart Analytics System and Oracle Exadata [for data warehouse environments]. This video is packed with great practical advice for evaluating these products.

Written by Conor O'Mahony

August 30, 2011 at 8:30 am

Oracle Exadata vs. IBM pureScale Application System for OLTP Environments


Philip Howard, Research Director at Bloor Research, recently evaluated the performance, scalability, administration, and cost considerations for the leading integrated systems from IBM and Oracle for OnLine Transaction Processing (OLTP) environments. Here is a summary of his conclusions:

Bloor Research compare Oracle Exadata and IBM Smart Analytics System for OLTP

And here is a video with his evaluation. It is packed with practical advice regarding storage capacity, processing capacity, and more.

Written by Conor O'Mahony

August 29, 2011 at 8:30 am

Forrester’s Noel Yuhanna on “New Approaches for Database Cost Savings”


Noel Yuhanna is one of the more prominent names in the database software industry. He is the principal analyst covering database software at Forrester. Here’s a 12-minute video where Noel describes his view of the most commonly used strategies for lowering your database-related costs. Topics include virtualized infrastructure, database compatibility layers, database-as-a-service, database compression, database sub-setting, and administration automation. This video is packed with interesting information. I hope you enjoy it!

Written by Conor O'Mahony

August 26, 2011 at 3:39 pm

Posted in Cost, DBA, Video
