Friday, September 11, 2009
The 'eating right' trap
Do you have rigid rules around eating and watch every morsel? If so, you're most prone to falling ill.
10 Information Technology Projects for 2010
1. Mobility: It's time to think about application development with the mobile device as the primary client. Your top executives and your sales force are using their mobiles as their primary way of staying in touch with the company. Your customers are more likely to respond to offers made via mobile messages. Rather than thinking about how to mobilize older enterprise applications, think of mobility as the starting point of the project.
8. Understand the new online contractor services: Web firms like Elance are changing the way contractors are hired. If there is an upside to a strained economy, it is that there are lots of good contractors suddenly available. You really need to understand how these new Web-based contractor services work if you are going to figure out how to get the programming and application development resources for your new projects.
Thursday, August 13, 2009
SAN Management in the Virtual Era
It's probably a safe bet to assume that even though many of you have a fair amount of experience managing virtual environments by now, you're still struggling to keep data running smoothly over your SAN.
SAN management, of course, was no picnic back in the days of physical servers, but it's become a real monster here in the virtual universe. In an age where just about anyone can provision a new server for whatever reason, ensuring that data loads running to and from storage don't jam up the entire network is twice the hassle it used to be.
According to InfoStor's Dave Simpson, anywhere from 70 percent to 90 percent of all performance problems in virtual environments are tied to the SAN. The more virtual servers you have, the less effective traditional tools like device-monitoring modules and standard management stacks become. To really get a handle on the problem, you need to start looking for systems that extend visibility deep into the virtual environment, ones that not only correct problems that arise, but provide configuration and performance-optimization tools to prevent tie-ups from happening in the first place.
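To make that idea concrete, here's a minimal sketch of the kind of per-datastore visibility such a tool provides: poll I/O latency for each datastore and flag the ones that look contended. The collector function, threshold and figures below are hypothetical placeholders for illustration, not any vendor's actual API.

# Minimal sketch: flag datastores whose average I/O latency suggests SAN contention.
# The collector below is a stand-in; real tooling would pull these numbers from the
# hypervisor's performance counters rather than a hard-coded dict.

LATENCY_THRESHOLD_MS = 20  # assumed alerting threshold, tune for your environment

def collect_datastore_latency():
    """Stand-in for a call into the virtualization platform's perf counters."""
    return {
        "datastore-01": [12.0, 15.5, 11.2],
        "datastore-02": [35.1, 41.7, 38.9],  # contended
        "datastore-03": [8.4, 9.1, 7.6],
    }

def find_contended_datastores(samples, threshold_ms=LATENCY_THRESHOLD_MS):
    """Return datastores whose mean latency exceeds the threshold."""
    return {
        name: sum(values) / len(values)
        for name, values in samples.items()
        if sum(values) / len(values) > threshold_ms
    }

if __name__ == "__main__":
    for name, avg in find_contended_datastores(collect_datastore_latency()).items():
        print(f"WARNING: {name} averaging {avg:.1f} ms, candidate for rebalancing")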
Naturally, the major virtual platform providers hold all the cards when it comes to the kinds of SAN-management systems they will support. Fortunately, most are eager to link up with third-party providers as a means to extend functionality over the widest possible user set.
VMware, for instance, offers its Certification Program for the ESX Server 3.5, which recently gave the seal of approval to Enhance Technology's UltraStor RS8 IP and RS16 IP iSCSI arrays. The system rides on the Intel IOP platform, offering quad-channel GbE iSCSI ports for throughput up to 400 MBps and a total capacity of 120 TB.
Oracle VM users, meanwhile, will soon have access to a substantial SAN management upgrade with the Fujitsu FlexFrame for Oracle platform. FlexFrame is essentially a pre-integrated IT infrastructure used for dynamic reassignment of server resources, providing streamlined installation and management, as well as QoS control and automated failover. In the next few months, Fujitsu plans to add a Virtual IO Manager (VIOM) module to the system that simplifies LAN and SAN environments through automated reallocation techniques like virtualized physical network addresses.
Virtual SAN management is about more than adding goodies to the management stack, of course. Baseline's David Strom highlights some of the practical approaches that IT executives have identified as crucial to effective virtual management. Chief among them is assessing how much storage you really have to play with once SAN overhead and RAID grouping have been taken into account. You'll also need to determine how disaster recovery and other functions will alter SAN configurations. Cross-team training and collaboration are also crucial to ensure separate work groups have the best interests of the entire network in mind.
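For that first point, a quick back-of-the-envelope calculation is often enough. The sketch below shows how quickly raw capacity shrinks to usable capacity; the RAID efficiencies are the standard parity math, but the group size and the 10 percent overhead reserve are assumptions, not figures from any particular array.

# Back-of-the-envelope sketch of "how much storage you really have to play with"
# once RAID grouping and reserved overhead are taken out. Numbers are illustrative.

def usable_capacity_tb(raw_tb, raid_level="raid5", disks_per_group=8, overhead_pct=0.10):
    """Estimate usable TB from raw TB for a few common RAID layouts."""
    efficiency = {
        "raid10": 0.5,                                       # mirrored: half the raw capacity
        "raid5": (disks_per_group - 1) / disks_per_group,    # one parity disk per group
        "raid6": (disks_per_group - 2) / disks_per_group,    # two parity disks per group
    }[raid_level]
    return raw_tb * efficiency * (1 - overhead_pct)

# 100 TB raw in 8-disk RAID 6 groups, with ~10% reserved for snapshots and spares:
print(f"{usable_capacity_tb(100, 'raid6'):.1f} TB usable")   # ~67.5 TB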
SAN management is a lot like cleaning house. No one really appreciates all the areas that are clean, only the ones that are still dirty. No matter how effective or up-to-date your management architecture and policies are, there is always room for improvement.
A Push for VDI in 2010?
The momentum for desktop virtualization seems to be building, with many analysts expecting a major push by the vendor community in 2010.
The timing is certainly right for those who have invested a lot of development time and energy in virtualization platforms – that would be VMware, Microsoft and Citrix. With server virtualization now firmly entrenched in the enterprise and advanced cloud architectures still largely in the formative stage, virtual desktop infrastructure (VDI) is a nice filler to keep the production lines rolling.
However, many are still raising caution flags, not because the technology isn't viable, but because expectations for VDI should not be quite the same as for server or storage virtualization.
VDI was front and center at the recent Catalyst Conference put on by the Burton Group in San Diego. According to Citrix's Sumit Dhawan, the topic drew the most interest from attendees and analysts alike, with many hoping to get out from under increasingly burdensome hardware cycles and tap into more flexible management and upgrade programs. He reports that while earlier VDI architectures were cumbersome, newer generations allow for things like pool management using single OS and app instances, which helps cut down on storage requirements.
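The storage math behind that pooling claim is simple enough to sketch. The comparison below pits full per-desktop images against a single shared base image plus small per-user deltas; the image and delta sizes are assumptions for illustration only, not Citrix figures.

# Rough arithmetic behind the "single OS instance per pool" claim: compare full
# per-desktop images against a shared base image plus small per-user deltas.

def full_clone_storage_gb(desktops, image_gb=25):
    """Every desktop carries its own full OS image."""
    return desktops * image_gb

def pooled_storage_gb(desktops, base_image_gb=25, delta_gb=2):
    """One shared base image, plus a small writable delta per desktop."""
    return base_image_gb + desktops * delta_gb

desktops = 500
print(full_clone_storage_gb(desktops))   # 12500 GB with one full image per desktop
print(pooled_storage_gb(desktops))       # 1025 GB with a single shared base image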
VMware, naturally, is keen on moving VDI into the mainstream, with expectations that this month’s VMworld in San Francisco will be used to set the stage for a major campaign next year. Expect to hear about a new edition of the VMware View platform, plus plans for virtualization clients for smartphones and other mobile devices.
It’s no surprise that the vendor community wants to put the best face on VDI, and there is every reason to believe the technology will provide a workable solution for many enterprises. But implementation will no doubt come with a unique set of challenges, and it’s incumbent on early adopters to make sure they have a clear understanding of what VDI can and cannot do.
Adam Oliver, systems engineer at triCerat Inc., which specializes in print operations in complex environments, says VDI infrastructures can be less effective if managers fail to address issues like individual user settings and machine management, printer accessibility, security and overall system monitoring. Far from eliminating the problems of physical machines, VDI will most likely swap one set of challenges for another.
There are a number of persistent myths about VDI that need to be overcome if it is to find its way into the enterprise on a broad scale, according to Forrester’s Natalie Lambert. Chief among them are the beliefs that a single platform will meet all your needs and that all of your ideal solutions can be legally implemented. Licensing fees and restrictions will quickly blow those expectations out of the water, but, if you’re not careful, only after the platform has been deployed.
VDI has been on the cusp for so long now that it’s easy to dismiss this latest push as simply another attempt at pushing a technology that has so far failed to make a case for itself. The problem with that theory is that by any measure, the traditional desktop infrastructure at most organizations is a major cost center, both in capex and opex.
WAN Acceleration on a Global Scale
If it's true that the cloud is basically virtualization over the wide area, then it's no surprise to see many of the top platform providers jumping on the WAN optimization bandwagon, particularly when it comes to enhancing disaster recovery and continuity services.
Inside the data center, the increased use of virtual machines is driving all manner of network upgrades -- from 10 GbE and virtual I/O to network convergence and advanced optical technologies. But once you leave those confines, you're at the mercy of carrier networks where bandwidth is shared by millions if not billions of users.
That's why the focus on the WAN has been on reducing data loads while still maintaining the performance levels that users enjoy on their local networks -- a feat that, by many accounts, the leading optimization providers are awfully close to achieving.
That's part of the reason why cloud providers like NewServers are so keenly interested in WAN technologies. The company recently tapped Silver Peak Systems to provide acceleration for its Hardware-as-a-Service offering. NewServers provides what it calls "bare metal devices" to large enterprises over the cloud, which means it needs a way to speed up the performance of network devices and applications dealing with high sustained data volumes. The company chose the Silver Peak NX accelerators because of their ability to overcome the limitations of TCP networks, allowing enterprise applications to scale over the cloud without having to rewrite any code.
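The TCP limitation in question is easy to quantify: a single stream can push at most roughly its window size divided by the round-trip time, no matter how fat the pipe. A quick sketch, with illustrative window and RTT figures rather than measurements from any particular network:

# Why a "fat" link can still feel slow: a single TCP stream tops out at roughly
# (window size / round-trip time), regardless of raw bandwidth.

def max_tcp_throughput_mbps(window_bytes, rtt_ms):
    """Upper bound on single-stream throughput, in megabits per second."""
    return (window_bytes * 8) / (rtt_ms / 1000) / 1_000_000

# A default 64 KB window across a coast-to-coast 80 ms round trip:
print(f"{max_tcp_throughput_mbps(64 * 1024, 80):.1f} Mbps")   # ~6.6 Mbps

# The same window on a 1 ms LAN round trip:
print(f"{max_tcp_throughput_mbps(64 * 1024, 1):.1f} Mbps")    # ~524 Mbps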
Hitachi Data Systems had the same idea in mind for its remote disaster recovery and data replication systems, although the company went with the Riverbed Steelhead appliance. The units have been qualified for the TrueCopy Remote Replication and Universal Replicator systems, a combination that both companies say not only cuts back on the need for additional bandwidth, storage and servers, but extends greater application visibility across the entire network. And naturally, it greatly enhances recovery speeds by limiting the amount of data traversing the WAN.
And at a time when relations between IBM and Cisco appear strained due to Cisco's entrance into the server market, they are still willing partners when it comes to WAN services. The companies have brought Cisco's XRC acceleration software to z/OS customers using the Global Mirror business continuity service. The combo extends WAN performance over links of up to 200 km and adds a number of parallel processing capabilities for Cisco customers, such as support for multiple system data movers (SDMs) and parallel access volumes (PAVs). It also supports Fibre Connection (FICON) Direct Access Storage Devices (DASDs) from IBM, EMC and Hitachi.
The business productivity community is also looking to enhance its WAN capabilities as it extends products over the cloud. SAP's NetWeaver platform is a case in point, with NetWeaver product manager Jana Richter telling SearchEnterpriseWAN recently that the company is already targeting customers interested in extending applications halfway around the world. The cloud, in fact, will put current centralization efforts to shame, placing a tremendous burden on the WAN to maintain local-area performance levels.
Cooling a More Mature Green Data Center
The concept of "going green" in the data center may have jumped the shark, as the saying goes, but only in the sense that adopting energy-efficient technologies and practices is no longer about catch phrases and empty promises.
Instead, it appears that the movement is transcending the novelty phase and has settled in as a permanent component of the upgrade and expansion process. That means you're less likely to see green technologies put in place for their PR factor as much as for their impact on the bottom line.
Nowhere is this more apparent than in cooling systems. Where once the standard practice was to build out massive cooling structures regardless of how much energy they consumed, the newest designs are all about keeping systems cool without busting the budget.
For example, Google recently powered up a chillerless data center in Belgium, foregoing the massive refrigerators that typically inhabit large facilities in favor of naturally cool water from a nearby industrial canal. On the few days of the year when it gets too hot, the company plans to shut down systems and shift loads to other centers.
Air-cooling systems are also getting a makeover. APC just launched a series of row-based cooling systems that not only use less power than traditional designs, but are more scalable and can be installed in greater densities. The system focuses cool air directly onto rows of servers, allowing particular rows to receive additional cooling if they are running high-density applications. A single 600 mm row provides up to 7 kW of cooling capacity and comes equipped with monitoring and automation features to dynamically adjust capacity to maintain constant temperatures at the server inlets.
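Sizing such a row is mostly arithmetic. The sketch below estimates how many 7 kW units a mixed-density row would need; the per-rack loads and the 20 percent headroom factor are assumptions for illustration, not APC guidance.

# Quick sizing sketch for row-based cooling: how many ~7 kW units does a row need?
import math

def cooling_units_needed(rack_loads_kw, unit_capacity_kw=7.0, headroom=1.2):
    """Round up to whole units, with 20% headroom for redundancy and peaks."""
    total_kw = sum(rack_loads_kw) * headroom
    return math.ceil(total_kw / unit_capacity_kw)

# A row of ten racks averaging 4 kW each, plus two 12 kW high-density racks:
row = [4] * 10 + [12, 12]
print(cooling_units_needed(row))   # 11 units for 64 kW of load plus headroom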
Another movement afoot in energy-efficient circles is coordination with power and cooling experts at the systems integration stage. Sun Microsystems recently teamed up with Emerson Network Power to offer customized solutions for individual facilities. Emerson maintains power and cooling specialist teams throughout the world capable of devising plans and initiating specific products and services designed to improve data center efficiency. One of their first customers is Sandia National Laboratories, which recently received a new series of Sun Blade X6275 modules and the Sun Cooling Door system tied to Emerson's Liebert XD cooling platform.
When it comes to keeping things cool, the twin dangers are doing too little and doing too much, according to Amazon engineer James Hamilton. On the too-little side, he lists failing to seal air flowing into and out of the rack, while the too-much crowd includes enterprises that tend to over-cool their rooms. The American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) recommends 81 degrees F for today's servers, with an allowable range that extends to 90 degrees. If you look hard enough, you might even find some equipment that's rated for over 100 degrees.
Despite the advances that have taken place in cooling systems of late, the fact is that energy efficiency remains the second most important factor in any redesign. The primary consideration should be reliability. All the energy savings in the world won't amount to a hill of beans if the system fails outright or fails to maintain a proper working temperature.
Tool Gives Tips for Selling Management on Data Warehousing
IT Business Edge contributor Michael Stevens has uploaded a checklist that helps users put together a plan for selling upper-level management on data warehousing. The typical ROI-based business case will not work well for selling a data warehousing initiative to upper management. While there may be some long-term savings that can be quantified, they will almost certainly be outweighed by costs, Stevens says.
The real benefits of data warehousing are indirect: the ability of your company to make better, faster decisions resulting in cost savings or increased revenue. For example, a data warehouse can help a manufacturer identify poorly performing suppliers or uncover sales patterns that could be exploited to boost the top line.
Here are a few of the tips from the checklist:
Multi-Source Reports. Data warehouses can be set up to accommodate data from multiple sources to provide a clear picture of relationships that would otherwise be hard to track, e.g., the relationship of logistics decisions to sales, or of training costs to productivity.
Reliable Data. Data warehousing initiatives typically include a data cleansing process that eliminates, for example, multiple names for the same individual or part. As a result, the data that decisions are based upon is more accurate.
Drill-down. A data warehouse can present data at any level of detail – from global to regional to site-by-site. It can therefore serve the needs and management styles of all managers – those who are only interested in the big picture and those who want to examine highly granular data. (A small illustration of drill-down follows below.)
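Here is the promised illustration of drill-down against a single fact table, rolled up at three levels using pandas; the column names and figures are made up purely for the example.

# Illustrative sketch of drill-down: the same fact table summarized at three levels.
import pandas as pd

sales = pd.DataFrame({
    "region":  ["NA", "NA", "EMEA", "EMEA"],
    "site":    ["Chicago", "Austin", "London", "Berlin"],
    "revenue": [120, 95, 140, 80],
})

print(sales["revenue"].sum())                                # global: 435
print(sales.groupby("region")["revenue"].sum())              # regional: NA 215, EMEA 220
print(sales.groupby(["region", "site"])["revenue"].sum())    # site-by-site detail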
Stimulus Data Reporting Requirements Set
The U.S. Office of Management and Budget has released a hefty document outlining the information that state and local governments must report about use of money from the federal stimulus package, reports Government Technology.
It's called "Recipient Reporting Data Model v3.0" and can be downloaded from the site recovery.gov.
The 22-page document details nearly 100 pieces of data required from recipients of the federal funds by Oct. 10.
IT Business Edge's Ann All has written that the level of transparency required in the stimulus spending is a boon for business intelligence vendors.
Meanwhile, a Stateline.org article on Government Technology gives mixed reviews to state Web sites tracking the stimulus spending. It refers to a report by Good Jobs First that ranked Maryland as the best in explaining how the federal money was being spent, and Illinois as dead last of the 50 states.
California Launches Online Data Repository
In an effort to make state data easier to find, California has launched a new online data repository, according to Government Technology. The site contains information on population, education, imports and exports, traffic, and travel and tourism, among other things.
Says state CIO Teri Takai,
"This new centralized data repository allows the public to find, use and repackage the volumes of data generated by the state, which were previously hard to find in various places throughout government. By publishing in different formats, we are empowering the public to use government data in creative ways to help improve our great state."
Last week, the National Association of State Chief Information Officers said it would work with the Office of Management and Budget and the General Services Administration to encourage states to build their own online "data catalogs."