Wednesday, August 26, 2009


Create a shortcut for removing a pendrive:

Rather than clicking on the taskbar icon and then safely removing the pendrive, you can do it more easily by creating a shortcut:
1) Right-click on your desktop and select New > Shortcut from the menu.
2) Type "RunDll32.exe shell32.dll,Control_RunDLL hotplug.dll" as the location, then click Next and Finish.
The shortcut is created, and now you can assign a shortcut key to it:
just right-click it, select Properties, click the Shortcut tab and assign a shortcut key (it will use Ctrl + Alt by default; e.g., if you press 1, the shortcut will be Ctrl+Alt+1).


How to copy a CD in Linux: You can easily copy a CD in Linux from the command line. Just follow these steps:

1) Type cd /media to enter the media directory, where all removable media file systems (such as CD-ROMs, floppy disks and USB drives) are mounted.
2) Type ls -l. This gives you a long listing of all mounted media. Note the exact spelling and case of each name, because Linux is case sensitive.
3) Type cd hello (if hello is the name of your media).
OR you can use cd * if only one medium is mounted.
4) Type cp * followed by the destination directory. This copies all the files on the CD to that directory.
Eg: cp * /var/ftp/pub
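The steps above can be run as one short shell session. The paths below are throwaway demo directories under /tmp (your actual mount point under /media will have a different name):

```shell
# Demo of the copy steps using throwaway directories under /tmp
# (in real use the source would be your CD's mount point under /media)
mkdir -p /tmp/media_demo/hello /tmp/dest_demo
echo "track01" > /tmp/media_demo/hello/song.txt

cd /tmp/media_demo          # step 1: enter the media directory
ls -l                       # step 2: long listing of mounted media (case matters)
cd hello                    # step 3: enter the medium by name
cp -r * /tmp/dest_demo      # step 4: copy everything to the destination
```

Note that a plain cp * skips subdirectories; adding -r copies them as well.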

10 Information Technology Projects for 2010

Maybe your company is different, but most companies I deal with don't start getting serious about creating their IT budgets for next year until after Labor Day. While the 2009 budgets were thrown out the window by many IT execs as the economy spiraled downward, 2010 looks more promising. So if you are one of those executives looking for a return to stability and maybe even some cautious growth, what are 10 IT projects that can help you achieve those goals? These are growth projects, not the constant standbys of security, storage, and hardware maintenance and upgrades. The old standbys have to be fed, but they won't fuel a growth rebound.

1. Mobility: It's time to think about application development with the mobile device as the primary client. Your top executives and your sales force are using their mobiles as their primary way of staying in touch with the company. Your customers are more likely to respond to offers made via mobile messages. Rather than thinking about how to mobilize older enterprise applications, think about mobility as the start of the project.

2. Social networks: This is one you are going to hear a lot about. What you are not going to hear a lot about is how to build the reporting tools required for a successful social network program. Rather than sending everyone off to tweet and run Facebook pages, start by defining what you are trying to accomplish.

3. Enlarge your company's product development team: Remember the motto, "The customer is always right"? Technology firms such as Dell have had success at bringing their customer base into the product development process. Do you make it easy for customers to recommend new products and improvements to existing products and services? You should.

4. Get with the cloud and virtualization: One of the problems with the way technology firms have marketed cloud computing and virtualization is the pitch that it is mostly about cutting costs. However, cloud computing also offers a way for companies to quickly provision technology infrastructure for startups within their own company.

5. Think outside your technology box: Yes, it is easier to manage your technology resources when everyone uses the same laptop, the same operating system, the same database, etc. But if your company is going to take advantage of new business intelligence tools hosted in the cloud or new e-commerce tools from companies like Amazon, you have to start small. You have to find and fund the technology pioneers in your company, or you are going to be stuck in the same tech rut as last year.

6. Be a leader: How are you developing the new tech talent in your company? While your travel budget may have been clobbered by the economic recession, you have many opportunities in virtual trade shows, e-seminars and smaller local events. Do you track the e-seminars your employees attend? Do you have a way to evaluate which e-seminars and virtual trade shows offer the most value? Do you recognize employees who have gone out of their way to learn a new technology and bring it to your company's talent pool? You should.

7. Think outside your company's business box: How much time do you spend looking at how your company's competitors are using technology? What is their Web interface like? How easy or difficult is it to order a product from the competitor? Sign up for their newsletters, mobile alerts and e-seminars where they may be a presenter. Now, take a few moments to step away from your industry and see how technology is being used by startups. Are they making use of geolocation services? What are their offerings like on mobile devices such as the iPhone App Store? This is not just Web surfing—this is called competitive analysis, and if you are structured about it, you can find some good ideas for your company.

8. Understand the new online contractor services: Web firms like ELance are changing the way contractors are hired. If there is an upside to a strained economy, it is that there are lots of good contractors suddenly available. You really need to understand how these new Web-based contractor services work if you are going to figure out how to get the programming and application development resources for your new projects.

9. Rethink your company's IT infrastructure: I know this article is about 10 projects for new services, but if yours is like most companies, the majority of your IT budget still goes into keeping the lights on and the servers running. Reducing those costs is how you free up new development dollars. The new part is that you have a wider range of hosted services to look at than even a year ago.

10. Be structured about looking at the new offerings from old vendors: Soon you will be asked about Windows 7, new hosted application offerings from Oracle, new video services from Cisco and new business applications from Google. The big vendors have not been sleeping, but have been waiting for some economic sunlight before making their product marketing pushes. How many of these big platform switches can you make in a year? How rigorous an ROI can you attach to these offerings? The execs from these companies are playing golf with your boss, and you need a clear reason why you are, or are not, ready to take on big projects that will consume most of your new-project dollars and people resources.

Thursday, August 13, 2009

SAN Management in the Virtual Era

It's probably a safe bet to assume that even though many of you have a fair amount of experience managing virtual environments by now, you're still struggling to keep data running smoothly over your SAN.

SAN management, of course, was no picnic back in the days of physical servers, but it's become a real monster here in the virtual universe. In an age where just about anyone can provision a new server for whatever reason, ensuring that data loads running to and from storage don't jam up the entire network is twice the hassle it used to be.

According to InfoStor's Dave Simpson, anywhere from 70 percent to 90 percent of all performance problems in virtual environments are tied to the SAN. The more virtual servers you have, the less effective traditional tools like device-monitoring modules and standard management stacks become. To really get a handle on the problem, you need to start looking for systems that extend visibility deep into the virtual environment, ones that not only correct problems that arise, but provide configuration and performance-optimization tools to prevent tie-ups from happening in the first place.

Naturally, the major virtual platform providers hold all the cards when it comes to the kinds of SAN-management systems they will support. Fortunately, most are eager to link up with third-party providers as a means to extend functionality over the widest possible user set.

VMware, for instance, offers its Certification Program for the ESX Server 3.5, which recently gave the seal of approval to Enhance Technology's UltraStor RS8 IP and RS16 IP iSCSI arrays. The system rides on the Intel IOP platform, offering quad-channel GbE iSCSI ports for throughput up to 400 MBps and a total capacity of 120 TB.

Oracle VM users, meanwhile, will soon have access to a substantial SAN management upgrade with the Fujitsu FlexFrame for Oracle platform. FlexFrame is essentially a pre-integrated IT infrastructure used for dynamic reassignment of server resources, providing streamlined installation and management, as well as QoS control and automated failover. In the next few months, Fujitsu plans to add a Virtual IO Manager (VIOM) module to the system that simplifies LAN and SAN environments through automated reallocation techniques like virtualized physical network addresses.

Virtual SAN management is about more than adding goodies to the management stack, of course. Baseline's David Strom highlights some of the practical approaches that IT executives have identified as crucial to effective virtual management. Chief among them is assessing how much storage you really have to play with after SAN overhead and RAID grouping have been taken into account. You'll also need to determine how disaster recovery and other functions will alter SAN configurations. Cross-team training and collaboration are also crucial to ensure separate work groups have the best interests of the entire network in mind.

SAN management is a lot like cleaning house. No one really appreciates all the areas that are clean, only the ones that are still dirty. No matter how effective or up-to-date your management architecture and policies are, there is always room for improvement.

Fortunately, as virtualization becomes the norm in the data center, the primary management challenges are being addressed. The question remains, though, whether management systems have the capability to scale as much as the virtual environment does, or are we heading toward a brick wall?

A Push for VDI in 2010?

The momentum for desktop virtualization seems to be building, with many analysts expecting a major push by the vendor community in 2010.

The timing is certainly right for those who have invested a lot of development time and energy in virtualization platforms – that would be VMware, Microsoft and Citrix. Now that server virtualization has a firm hold on the enterprise and advanced cloud architectures are still largely in the formative stage, virtual desktop infrastructure (VDI) is a nice filler to keep the production lines rolling.

However, there are still many who are raising caution flags, not that the technology is not viable, but that the expectations for VDI should not be quite the same as server or storage virtualization.

VDI was front and center at the recent Catalyst Conference put on by the Burton Group in San Diego. According to Citrix’s Sumit Dhawan, the topic drew the most interest from attendees and analysts alike, with many hoping to get out from under increasingly burdensome hardware cycles and tap into more flexible management and upgrade programs. He reports that while earlier VDI architectures were cumbersome, newer generations allow for things like pool management using single OS and app instances, which helps cut down on storage requirements.

VMware, naturally, is keen on moving VDI into the mainstream, with expectations that this month’s VMworld in San Francisco will be used to set the stage for a major campaign next year. Expect to hear about a new edition of the VMware View platform, plus plans for virtualization clients for smartphones and other mobile devices.

It’s no surprise that the vendor community wants to put the best face on VDI, and there is every reason to believe the technology will provide a workable solution for many enterprises. But implementation will no doubt come with a unique set of challenges, and it’s incumbent on early adopters to make sure they have a clear understanding of what VDI can and cannot do.

Adam Oliver, systems engineer at triCerat Inc., which specializes in print operations in complex environments, says VDI infrastructures can be less effective if managers fail to address issues like individual user settings and machine management, printer accessibility, security and overall system monitoring. Far from eliminating the problems of physical machines, VDI will most likely swap one set of challenges for another.

There are a number of persistent myths about VDI that need to be overcome if it is to find its way into the enterprise on a broad scale, according to Forrester’s Natalie Lambert. Chief among them are the beliefs that a single platform will meet all your needs and that all of your ideal solutions can be legally implemented. Licensing fees and restrictions will quickly blow those expectations out of the water, but, if you’re not careful, only after the platform has been deployed.

VDI has been on the cusp for so long now that it’s easy to dismiss this latest push as simply another attempt at pushing a technology that has so far failed to make a case for itself. The problem with that theory is that by any measure, the traditional desktop infrastructure at most organizations is a major cost center, both in capex and opex.

If VDI can be shown to reduce those costs as dramatically as backers claim, it may prove too good to resist.

WAN Acceleration on a Global Scale

If it's true that the cloud is basically virtualization over the wide area, then it's no surprise to see many of the top platform providers jumping on the WAN optimization bandwagon, particularly when it comes to enhancing disaster recovery and continuity services.

Inside the data center, the increased use of virtual machines is driving all manner of network upgrades -- from 10 GbE and virtual I/O to network convergence and advanced optical technologies. But once you leave those confines, you're at the mercy of carrier networks where bandwidth is shared by millions if not billions of users.

That's why the focus on the WAN has been on reducing data loads while still maintaining the performance levels that users enjoy on their local networks -- a feat that, by many accounts, the leading optimization providers are awfully close to achieving.

That's part of the reason why cloud providers like NewServers are so keenly interested in WAN technologies. The company recently tapped Silver Peak Systems to provide acceleration for its Hardware-as-a-Service offering. NewServers provides what it calls "bare metal devices" to large enterprises over the cloud, which means it needs a way to speed up the performance of network devices and applications dealing with high sustained data volumes. The company chose the Silver Peak NX accelerators because of their ability to overcome the limitations of TCP networks, allowing enterprise applications to scale over the cloud without having to rewrite any code.

Hitachi Data Systems had the same idea in mind for its remote disaster recovery and data replication systems, although the company went with the Riverbed Steelhead appliance. The units have been qualified for the TrueCopy Remote Replication and Universal Replicator systems, a combination that both companies say not only cuts back on the need for additional bandwidth, storage and servers, but extends greater application visibility across the entire network. And naturally, it greatly enhances recovery speeds by limiting the amount of data traversing the WAN.

And at a time when relations between IBM and Cisco appear strained due to Cisco's entrance into the server market, they are still willing partners when it comes to WAN services. The companies have brought Cisco's XRC acceleration software to the z/OS customers using the Global Mirror business continuity service. The combo extends WAN performance up to 200 km links, and adds a number of parallel processing capabilities to Cisco customers, such as support for multiple system data movers (SDMs) and parallel access volumes (PAVs). It also supports Fibre Connection (FICON) Data Access Storage Devices (DASDs) from IBM, EMC and Hitachi.

The business productivity community is also looking to enhance its WAN capabilities as it extends products over the cloud. SAP's NetWeaver is a case in point, with NetWeaver product manager Jana Richter telling SearchEnterpriseWAN recently that the company is already targeting customers interested in extending applications halfway around the world. The cloud, in fact, will put current centralization efforts to shame, placing a tremendous burden on the WAN to maintain local-area performance levels.

There are those who predict that the future of the local data center is in jeopardy -- that before too long, all IT resources will be contracted out on a utility basis, much the same way most businesses outsource energy production rather than manage and maintain their own power plants. While that vision may be a little extreme, the fact remains that WAN acceleration is one of the key growth areas for the enterprise even as the rest of the economy emerges from recession.

Cooling a More Mature Green Data Center

The concept of "going green" in the data center may have jumped the shark, as the saying goes, but only in the sense that adopting energy-efficient technologies and practices is no longer about catch phrases and empty promises.

Instead, it appears that the movement is transcending the novelty phase and has settled in as a permanent component of the upgrade and expansion process. That means you're less likely to see green technologies put in place for their PR factor as much as for their impact on the bottom line.

Nowhere is this more apparent than in cooling systems. Where once the standard practice was to build out massive cooling structures regardless of their consumption, the newest designs are all about keeping systems cool without busting the budget.

For example, Google recently powered up a chillerless data center in Belgium, foregoing the massive refrigerators that typically inhabit large facilities in favor of naturally cool water from a nearby industrial canal. On the few days of the year when it gets too hot, the company plans to shut down systems and shift loads to other centers.

Air-cooling systems are also getting a makeover. APC just launched a series of row-based cooling systems that not only use less power than traditional designs, but are more scalable and can be installed in greater densities. The system focuses cool air directly onto rows of servers, allowing particular rows to receive additional cooling if they are running high-density applications. A single 600 mm row provides up to 7 kW of cooling capacity and comes equipped with monitoring and automation features to dynamically adjust capacity to maintain constant temperatures at the server inlets.

Another movement afoot in energy-efficient circles is coordination with power and cooling experts at the systems integration stage. Sun Microsystems recently teamed up with Emerson Network Power to offer customized solutions for individual facilities. Emerson maintains power and cooling specialist teams throughout the world capable of devising plans and initiating specific products and services designed to improve data center efficiency. One of their first customers is Sandia National Laboratories, which recently received a new series of Sun Blade X6257 modules and the Sun Cooling Door system tied to Emerson's Liebert XD cooling platform.

When it comes to keeping things cool, the twin dangers are doing too little and doing too much, according to Amazon engineer James Hamilton. On the too-little side, he lists failing to seal air flowing into and out of the rack, while the too-much crowd includes enterprises that tend to over-cool their rooms. The American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) recommends 81 degrees F for today's servers, with an allowable range that extends to 90 degrees. If you look hard enough, you might even find some equipment that's rated for over 100 degrees.

Despite the advances that have taken place in cooling systems of late, the fact is that energy efficiency remains the second most important factor in any redesign. The primary consideration should be reliability. All the energy savings in the world won't amount to a hill of beans if the system fails outright or fails to maintain a proper working temperature.

The good news is that these two requirements are not at odds anymore. Greater efficiency is working hand-in-hand with greater reliability, which means you still maintain the same productivity you had before -- you just pay less for it over time.

Tool Gives Tips for Selling Management on Data Warehousing

IT Business Edge contributor Michael Stevens has uploaded a checklist that helps users put together a plan for selling upper-level management on data warehousing. The typical ROI-based business case will not work well for selling a data warehousing initiative to upper management. While there may be some long-term savings that can be quantified, they will almost certainly be outweighed by costs, Stevens says.

The real benefits of data warehousing are indirect: the ability of your company to make better, faster decisions resulting in cost savings or increased revenue. For example, a data warehouse can help a manufacturer identify poorly performing suppliers or uncover sales patterns that could be exploited to boost the top line.

Here are a few of the tips from the checklist:

Multi-Source Reports. Data warehouses can be set up to accommodate data from multiple sources to provide a clear picture of relationships that would otherwise be hard to track, e.g. the relationship of logistics decisions to sales or training costs to productivity.

Reliable Data. Data warehousing initiatives typically include a data cleansing process that eliminates, for example, multiple names for the same individual or part. As a result, the data that decisions are based upon is more accurate.

Drill-down. A data warehouse can present data at any level of detail – from global, to regional, to site-by-site. It can therefore serve the needs and management styles of all managers – those who are only interested in the big picture and those who want to examine highly granular data.

Data Synchronization

Data synchronization is the process of ensuring that data in two or more applications or systems is identical, by automatically copying any changes back and forth. Data synchronization projects are often complicated by inconsistent data quality, large numbers of disparate data formats and security concerns. Systems also often have differing data integration needs: some require batch processing of bulk data, while others require real-time or near real-time data updates.
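As a minimal one-way sketch of the idea (the directory names here are hypothetical demo paths; a real synchronization project would also handle deletions, conflict resolution and format mapping between systems):

```shell
# One-way batch synchronization: copy only files that are new or have
# changed since the last run. Bidirectional sync repeats this in the
# other direction and resolves conflicts between the two copies.
mkdir -p /tmp/sync_src /tmp/sync_dst
echo "customer record v2" > /tmp/sync_src/record.txt

# cp -u copies a file only when the source is newer than the
# destination (or the destination copy is missing)
cp -ru /tmp/sync_src/. /tmp/sync_dst/
```

In practice a dedicated tool such as rsync does the same job more efficiently by transferring only the changed portions of each file.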

Stimulus Data Reporting Requirements Set

The U.S. Office of Management and Budget has released a hefty document outlining the information that state and local governments must report about use of money from the federal stimulus package, reports Government Technology.

It's called "Recipient Reporting Data Model v3.0" and can be downloaded from the site.

A 22-page document details nearly 100 pieces of data required from recipients of the federal funds by Oct. 10.

IT Business Edge's Ann All has written that the level of transparency required in the stimulus spending is a boon for business intelligence vendors.

Meanwhile, an article on Government Technology gives mixed reviews to state Web sites tracking the stimulus spending. It refers to a report by Good Jobs First that ranked Maryland as the best in explaining how the federal money was being spent, and Illinois as dead last of the 50 states.

California Launches Online Data Repository

In an effort to make state data easier to find, California has launched a new online data repository, according to Government Technology. The site contains information on population, education, imports and exports, traffic, and travel and tourism, among other things.

Says state CIO Teri Takai,

"This new centralized data repository allows the public to find, use and repackage the volumes of data generated by the state, which were previously hard to find in various places throughout government. By publishing in different formats, we are empowering the public to use government data in creative ways to help improve our great state."

Last week, the National Association of State Chief Information Officers said it would work with the Office of Management and Budget and the General Services Administration to encourage states to build their own online "data catalogs."


Hardware is a broad term that encompasses virtually all computing devices, from mainframes to laptops, as well as network switches and access points. Managing hardware consumes much of IT's resources, and proving the business value of new hardware often is a challenge, since it tends not to radically alter business processes. Managed infrastructure options, including paying for hardware access as a service, are emerging, but IT still has to manage the boxes.

Access points, sometimes called APs or transceivers, are wireless broadcasting stations that connect devices to each other and, typically, to a wired network and the Internet. APs have a limited range of coverage for wireless signals, so even small businesses often need to deploy more than one access point to cover their space completely. Security is a constant concern with APs, which are typically configured by manufacturers for open access. Many individuals and even businesses are lax about implementing even basic AP security.

The Great Data Warehouse Debate

In the past, pre-built analytic applications have nearly all failed to deliver, as vendors have often left data warehousing out of the application mix. Building a true data warehouse against a complex operational system takes time and money. Oracle has made the necessary data warehousing investment to create a business intelligence tool that can deliver faster deployment, a lower TCO and assured business value. This white paper explores the relative pros and cons of deploying pre-built analytic applications from Oracle versus building a custom data warehouse against Oracle E-Business Suite, PeopleSoft, Siebel or similar systems.