
Conor O'Mahony's Database Diary

Your source of IBM database software news (DB2, Informix, Hadoop, & more)

Archive for the ‘IBM System z’ Category

Comparing “New Big Data” with IMS on the Mainframe


While it does not come up often in today’s data management conversations, the IMS database software is at the heart of many major corporations around the world. For many people, it is the undisputed leader for mission-critical, enterprise transaction and data-serving workloads. IMS users routinely handle peaks of 100 million transactions in a day, and there are quite a few users who report more than 3,000 days without unplanned outages. That’s more than 8 years without an unplanned outage!
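As a quick back-of-the-envelope check on those numbers (my arithmetic, not an IBM figure), here is a minimal Python sketch:

```python
# Rough arithmetic behind the claims above (illustrative only).
peak_per_day = 100_000_000      # transactions handled in a peak day
seconds_per_day = 24 * 60 * 60  # 86,400 seconds in a day

avg_tps = peak_per_day / seconds_per_day
print(f"100M transactions/day averages out to {avg_tps:,.0f} tx/sec")

outage_free_days = 3_000
print(f"{outage_free_days} days is about {outage_free_days / 365.25:.1f} years")
```

Averaged across a full day, 100 million transactions works out to well over 1,100 transactions per second, sustained, which makes those 3,000-plus days without an unplanned outage all the more impressive.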

IBM recently announced IMS 12, claiming peak performance of a remarkable 66,000 transactions per second. The new release features improved performance and CPU efficiency for most IMS use cases, and a significant improvement in performance for certain use cases. For instance, the new Fast Path Secondary Index support means that workloads using secondary indexes run about 60% faster.

It is interesting to compare the performance of IMS with the headline-grabbing “big data” solutions that are all the rage today. For instance, at the end of August this year, we read how Beyonce Pregnancy News Births New Twitter Record Of 8,868 Tweets Per Second. I am not saying that IMS can replace the infrastructure of Twitter. Far from it. However, I am saying that, when you consider that IMS can handle 66,000 transactions per second, the relative performance levels of the “new big data” solutions compared with IMS are food for thought, especially when you consider the very significant infrastructure in place at Twitter and the staff needed to manage it. And don’t forget that IMS supports these performance levels with full read-write capability, full data integrity, and mainframe-level security.
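To put the two headline rates side by side (acknowledging that a tweet and an IMS transaction are very different units of work), a one-line comparison:

```python
# Comparing the quoted peak rates (events per second in both cases).
ims_peak_tps = 66_000      # IMS 12 peak, per IBM
twitter_peak_tps = 8_868   # tweets/sec during the Beyonce news spike

print(f"IMS peak is about {ims_peak_tps / twitter_peak_tps:.1f}x the Twitter record")
```

That is roughly a 7.4x difference, before you even factor in the read-write, integrity, and security guarantees mentioned above.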

I appreciate that many of today’s Web-scale businesses begin with capital constraints that preclude the hardware and software investments required for something like IMS. These new businesses need to be relatively agile, and they depend upon the low barrier to entry that x86-based systems and open source or inexpensive software afford. However, I still think it is interesting to put this “new big data” in perspective.


Written by Conor O'Mahony

November 9, 2011 at 2:17 pm

New IBM Smart Analytics Systems


Oracle garnered a lot of headlines a couple of weeks ago with their Oracle Database Appliance. It didn’t take long for SmarterQuestions to explain why the IBM Smart Analytics Systems are A Smarter Database System for SMB Clients.

Recently, IBM added the following systems:

  • IBM Smart Analytics System 5710, which is an x86-based Linux system
  • IBM Smart Analytics System 7710, which is a Power Systems-based UNIX system
  • IBM Smart Analytics System 9710, which is a mainframe-based system

These systems include everything you need to quickly set up a data warehouse environment, and to quickly have your business analysts working with the data.

On top of the servers and storage, these systems include database and data warehouse software, Cognos software, cubing services, data mining capabilities, and text analytics capabilities. And the family is available on your platform of choice (Linux, UNIX, or mainframe). It is also competitively priced: the starting price for the 5710 is under $50k, just like the Oracle appliance. However, the IBM system includes all of the necessary software, whereas with the Oracle appliance you have to purchase the Oracle Database software separately. And the Oracle Database software is not exactly inexpensive.

If you want to learn more, please visit the IBM Smart Analytics Systems Web page.

Written by Conor O'Mahony

October 13, 2011 at 11:26 am

Most Popular Presentation from IDUG DB2 Tech Conference 2011 in Anaheim


The International DB2 User Group (IDUG) is presenting a free webcast featuring the most popular presentation from the most recent IDUG DB2 Tech Conference, as voted by attendees at the conference. Suresh Sane will present A DB2 10 Customer’s Experience, which describes his experiences with DB2 10 for z/OS, including:

  • How new SQL features help
  • How hash access speeds up queries against large tables
  • How access path determination is now smarter
  • How concurrency is improved without sacrificing integrity
  • How temporal tables simplify code
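To give a flavor of that last point, here is a minimal, hypothetical sketch of querying a DB2 10 system-period temporal table from Python with the ibm_db driver; the connection string, table, and column names are illustrative, not taken from the presentation:

```python
# Hypothetical example: querying a system-period temporal table in DB2 10.
# Table and column names are made up for illustration.
import ibm_db

# Placeholder connection details.
conn = ibm_db.connect(
    "DATABASE=SAMPLE;HOSTNAME=db2host;PORT=50000;PROTOCOL=TCPIP;"
    "UID=user;PWD=password", "", "")

# DB2 10 can answer "what did this row look like on a given date?" directly,
# because the database maintains the version history for you.
sql = ("SELECT policy_id, coverage "
       "FROM policy FOR SYSTEM_TIME AS OF TIMESTAMP '2011-01-01 00:00:00' "
       "WHERE policy_id = ?")

stmt = ibm_db.prepare(conn, sql)
ibm_db.execute(stmt, (1414,))
print(ibm_db.fetch_assoc(stmt))
ibm_db.close(conn)
```

The point of the temporal support is exactly what the sketch suggests: the application issues one ordinary-looking query instead of maintaining its own history tables and triggers.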

This webcast is a must-see for database administrators and application developers, and it is filled with rich content and helpful hints and tips. As a special bonus, everyone who registers for the webcast will receive a complimentary copy of the Business Value of DB2 10 – Smarter Database for a Smarter Planet report by Julian Stuhler of Triton Consulting. The webcast will take place at 11am ET on Wednesday, 02 November 2011. To register, please go to DB2 10 Application Topics—A DB2 10 Customer’s Experience.

Written by Conor O'Mahony

October 6, 2011 at 12:24 pm

What Happens When You Pair Netezza with DB2 for z/OS?


The American Association of Retired Persons (AARP) recently paired Netezza with their transactional environment, which includes DB2 for z/OS, and achieved remarkable results. Often when you read customer success stories, you are bombarded with metrics. The AARP success story has those metrics:

  • 1400% improvement in data loading time
  • 1700% improvement in report response time
  • 347% ROI in three years
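For clarity on what those first two percentages translate to (this is my reading of an “N% improvement” as old/new - 1, not a figure from the case study):

```python
# Converting "N% improvement" into a speedup factor, assuming
# improvement% = (old_time / new_time - 1) * 100.
for label, pct in [("data loading", 1400), ("report response", 1700)]:
    print(f"{pct}% improvement in {label} is roughly {1 + pct / 100:.0f}x faster")
```

On that reading, data loading is about 15x faster and reports come back about 18x faster.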

But metrics tell only part of the story. And sometimes the story gets a lot more interesting when you dig a little deeper.

AARP had been using Oracle Database for their data warehouse. But their system simply could not keep up with the demand. As Margherita Bruni from AARP says, “our analysts would run a report, then go for coffee or for lunch, and, maybe if they were lucky, by 5:00 p.m. they would get the response—it was unacceptable—the system was so busy writing the new daily data that it didn’t give any importance to the read operations performed by users.” The stresses on their system were so great that in 2009 alone, their Oracle Database environment had more than 30 system failures. To compound matters, these system performance issues meant that full backups were not possible. Instead AARP would back up only a few critical tables, which is a less than desirable disaster recovery scenario. Clearly, something had to be done.

AARP chose to move their 36 TB data warehouse to Netezza. You can see from the metrics above that they achieved remarkable performance improvements. But what do those performance improvements mean? Well, for the IT staff, they mean relief from a huge daily burden. Their old system required one full-time database administrator (DBA) and one half-time SAN network support person. These people are now, for the most part, free to work on other projects. And more importantly, they don’t have to deal with the stress of the old environment any more.

But the benefits are not being enjoyed only by the IT staff. They are also being enjoyed by the business analysts, who according to Bruni “could not believe how quickly results were provided—they were so shocked that their work could be accomplished in a matter of hours rather than weeks that, initially, they thought data was cached.” She goes on to say that “one analyst, who is now a director, told us that he used the extra time for other projects, which ultimately helped him become more successful and receive a promotion.” Now that is what I call a great impact statement. The metrics are great, but when someone is freed up to do work that gets them a promotion, that’s a very tangible illustration of the difference that Netezza can make.

Another illustration of the difference is the impact it had on the group that implemented the Netezza system. As Bruni says, “after we moved to IBM Netezza, the word spread that we were doing things right and that leveraging us as an internal service was really smart; we’ve gained new mission-critical areas, such as the social-impact area which supports our Drive to End Hunger and Create the Good campaigns.” It certainly looks like you can add IT management to the list of constituents who have had a positive career impact as a result of moving from Oracle Database to IBM Netezza.

For more information about this story, see AARP: Achieving a 347 percent ROI in three years from BI modernization effort.

Written by Conor O'Mahony

September 27, 2011 at 10:33 am

More Organizations Move up to the Mainframe and DB2 for z/OS


Any of you who are familiar with DB2 on the mainframe (officially known as DB2 for z/OS) know how efficient it is. The mainframe is not for every organization. However, for those organizations for which the mainframe is a good fit, its tremendous levels of efficiency, reliability, availability, and security translate directly into significant cost savings.

Database software on the mainframe may be relatively boring when compared with the data management flavor of the day (whether it is Hadoop or any of the other technologies associated with Big Data). But when it comes to storing mission-critical transactions, nothing beats the ruthless efficiency of the mainframe. And that boring, ruthless efficiency has been winning over organizations.

Earlier this year, eWeek reported how the IBM Mainframe Replaces HP, Oracle Systems for Payment Solutions. In this article, eWeek describes how Payment Solution Providers (PSP) from Canada chose DB2 for z/OS over Oracle Database on HP servers. A couple of items in this article really catch the eye. One is that the operational efficiencies of the mainframe are expected to lower IT costs by up to 35 percent for PSP. The other is that PSP’s system can now process up to 5,000 transactions per second.

Another organization that moved in the same direction is BC Card—Korea’s largest credit card company. The Register ran a story about how a Korean bank dumps Unix boxes for mainframes. BC Card is a coalition of 11 South Korean banks that handles credit card transactions for 2.62 million merchants and 40 million credit card holders in the country. They dumped their HP and Oracle Sun servers in favor of an IBM mainframe. In an accompanying IBM press release, it was revealed that IBM scored highest in every benchmark test category, from performance to security to flexibility. Another significant factor in the move to the mainframe is the combination of utility pricing, which lets customers activate and deactivate mainframe engines on demand, and software pricing that scales up and down with capacity.

Despite continual predictions of its demise, the mainframe has reportedly experienced one of its best years ever, with an increase in usage (well, technically MIPS) of 86% over the same period in 2010. Much of this growth is coming from customers new to the mainframe. In fact, since the System z196 mainframe started shipping in the third quarter of 2010, IBM has added 68 new mainframe customers, with more than two-thirds of them consolidating from distributed systems.

It may not be as exciting as the newest technology on the block, but it is difficult to beat the reliability and efficiency of the mainframe, especially when you are faced with the realities of managing a relatively large environment and all of the costs associated with doing so. And don’t forget, the mainframe can provide you with a hierarchical store, a relational store, or a native XML store. When you combine the security advantages and the 24×7 availability with that cost efficiency, it makes for an interesting proposition.

Written by Conor O'Mahony

September 19, 2011 at 8:30 am
