Jan 21, 2015
 

January 20, 2015 – by Tom Spring – CRN

IBM is for the first time ever bringing its hardware and software partners under one roof into a Global Business Partner Group in a move designed to push the $100 billion IT goliath into an age of cloud, big data and analytics.

“IBM is putting in place a more integrated approach to IT solutions, breaking down silos,” said an IBM partner who asked not to be identified.

The internal shakeup is wide-ranging, affecting IBM’s channel strategy and including a reorganization of business units and an executive changing of the guard. While IBM declined to comment on the shakeup, channel partners confirmed the moves and said that channel leadership remains intact, with Marc Dupaquier, general manager of Global Business Partners at IBM, continuing to oversee IBM’s channel business.

Late Tuesday, IBM will announce its fourth-quarter earnings, and it is expected to report its 11th straight quarter without a revenue increase, according to Wall Street analysts.

Channel partners confirmed reports that IBM has put in place a new internal structure focused on a holistic approach to solving business problems centered on analytics, cloud, mobile and security. The structure is a move away from IBM’s existing business-unit approach, which favored stand-alone hardware, software and services silos.

Partners said the new groups include Research, Sales and Delivery, Systems, Global Technology Services, Cloud, Watson, Security, Commerce and Analytics. Notable to the channel community is that IBM’s hardware and software channel teams will be rolled into a Global Business Partner Group.

By unifying groups, IBM helps customers piece together solutions that span IBM’s vast product portfolio. “This means centralized management of business units and a more unified strategic direction for IBM instead of having separate IBM horses running their own races,” said the partner.

“It’s a welcome change,” said another IBM partner who asked not to be identified. He said IBM was too often competing with itself on sales.

“We’d bring security deals to IBM and one [IBM] group would want to sell it as a service and another would want to sell it as a traditional software sale,” said the IBM partner, who specializes in reselling IBM security solutions.

“Each IBM group was in a silo. They didn’t care if they walked into another deal. They didn’t care about confusing a customer. They wanted to make money,” said the partner. “We lost deals. Customers would have to wait for quotes and they would be confused. Ultimately, that pushed customers to the competition.”

Other tweaks to IBM’s channel include a greater emphasis on IBM’s regional sales reps. Partners told CRN that the change will affect national solution providers that do business in multiple geographic areas. In the past, when a Chicago-based solution provider needed IBM channel support in Boston, for example, it would work locally with its Chicago IBM reps. Now partners will be encouraged to work more closely with geography-specific channel reps instead.

IBM will focus more on where products and services land geographically, not on where they are launched from, partners said.

“There will always be a geographic advantage when a rep has boots on the ground where you want to do business,” said a national IBM channel partner who will be impacted by the changes.

The partner told CRN that the number of channel partners that are receiving dedicated channel managers has been reduced. That means less support for remaining partners when it comes to in-house account management, according to partners. On the flip side, it means more support for those IBM partners selling higher-value solutions.

“IBM is putting more focus on less partners,” said the national partner, who will be transitioned to receive less in-house support from IBM account managers. “They are relying a lot more on distributors. It’s unclear what impact this will have on my business,” he said.

IBM has further expanded existing relationships with a number of distributors, including Arrow, Ingram Micro, Tech Data and Avnet. For example, both Tech Data and Avnet began offering IBM’s VersaStack solution in December. Distributors have also begun selling IBM’s SoftLayer cloud services.

The change in coverage model, partners speculate, has to do with the massive internal changes IBM has undergone over the past 12 months.

Last year, IBM divested its x86 server business and its chip manufacturing business. The reorganization dovetails with major IBM investments in mobile solutions and analytics via its Watson group. Early last year, IBM announced a “workforce rebalancing” that reportedly led to a staffing reduction of 15,000 employees.

Over the past several years, Chief Executive Ginni Rometty has pushed IBM to realign itself for success in delivering big data, cloud, security and mobile solutions. The restructuring comes as the company struggles to reverse more than two years of revenue decline while adapting to shifts within IT, such as the move by companies to SaaS, PaaS and IaaS.

Part of the reorientation has included internal leadership changes.

Tom Rosamilia, former head of IBM’s Systems and Technology Group, now becomes senior vice president of IBM’s Systems group. According to Rosamilia’s updated official IBM bio, he has “global responsibility for all aspects of IBM’s middleware, servers and storage as well as the Company’s global Business Partners organization.”

Bob LeBlanc, who has been shepherding IBM’s cloud business and delivered a keynote at CRN’s Best of Breed conference in December, is now senior vice president for IBM’s cloud group, according to his IBM bio.

IBM’s Arvind Krishna becomes senior vice president and director of IBM Research. Krishna moves into his new role from his previous position as general manager of the Development and Manufacturing organization within IBM’s Systems and Technology Group, where he oversaw IBM’s semiconductor, server and storage systems research, according to his IBM bio.

Steven Mills becomes executive vice president of IBM Software and Systems. Prior to this position, Mills was senior vice president and group executive for IBM’s Software Group, according to his bio.

Partners said that while they are waiting for the dust to settle on the shakeup, the moves are positive.

“How you solve business problems today is not how you solved them just a few years ago,” said a business partner that asked not to be identified. “Too often there is a wide gulf between customer expectations versus what IBM can deliver. Now, IBM can streamline messaging, expectations and the solutions to solve specific business problems.”

Oct 06, 2014
 

October 6, 2014 – by Steven Burke – CRN

Hewlett-Packard Monday confirmed that it is splitting into what would effectively become two publicly traded Fortune 50 companies: a $56 billion PC and printing business and a $56 billion enterprise computing business.

HP’s personal systems and printing business will do business as HP Inc. and retain the current branding and logo. HP’s enterprise computing business, which will include enterprise systems, software and services, will do business as Hewlett Packard Enterprise.

HP shares were up six percent in premarket trading, at $37.20, on news of the split.

Following the split, which is expected to be completed by the end of HP’s fiscal year 2015, ending Oct. 31, 2015, HP shareholders will own shares of both HP Inc. and Hewlett Packard Enterprise.

HP said the split will provide each company with its “own more focused equity currency, and investors with the opportunity to invest in two companies with compelling and unique financial profiles suited to their respective businesses.” What’s more, HP said both companies will be “well capitalized and expect to have investment grade credit ratings and capital structures.”

The complex transaction, which is intended to be a tax-free distribution to HP’s shareholders for federal income tax purposes, has been approved by HP’s board of directors, but must still receive favorable rulings with respect to the tax-free nature of the deal.

HP Chairman and CEO Meg Whitman, who will retain a hand in both companies, said in a prepared statement that the split will accelerate her five-year HP turnaround plan, which is approaching its fourth year.

“Our work during the past three years has significantly strengthened our core businesses to the point where we can more aggressively go after the opportunities created by a rapidly changing market,” said Whitman. “The decision to separate into two market-leading companies underscores our commitment to the turnaround plan. It will provide each new company with the independence, focus, financial resources, and flexibility they need to adapt quickly to market and customer dynamics, while generating long-term value for shareholders.”

Whitman said that by transitioning from one HP to two new companies HP “will be in an even better position to compete in the market, support our customers and partners, and deliver maximum value to our shareholders.”

Whitman will be president and CEO of Hewlett Packard Enterprise with lead independent director Pat Russo acting as chairman of the Hewlett Packard Enterprise board.
Dion Weisler, the current executive vice president of HP’s personal systems and printing business, will be president and CEO of HP Inc., with Whitman acting as non-executive chairman of the HP Inc. board of directors.

Partners said the split opens the door to unlocking value and increasing innovation in both companies. “It’s a brilliant move,” said Mike Strohl, CEO of Entisys Solutions, a Concord, Calif.-based HP Platinum partner, No. 253 on the 2014 Solution Provider 500. “The PC printer business is more consumer focused and the enterprise business is a commercial focused business. Separating them will allow HP to drive more innovation, propelling them and their partners into a market leadership position.”

HP said the split will provide Hewlett Packard Enterprise with “additional resources and a reduction of debt at the operating company level to support investments across key areas of the portfolio.” The company assured that Hewlett Packard Enterprise customers will have the same “unmatched choice of how to deploy and consume technology with a simpler, more nimble” company.

“Hewlett-Packard Enterprise will accelerate innovation across key next-generation areas of the portfolio,” assured Whitman.
Weisler, for his part, called the split a “defining moment” in the industry as customers look for more “innovation to enable workforces that are more mobile, connected and productive.”

“As the market leader in printing and personal systems, an independent HP Inc. will be extremely well positioned to deliver that innovation across our traditional markets as well as extend our leadership into new markets like 3-D printing and new computing experiences – inventing technology that empowers people to create, interact and inspire like never before,” said Weisler in a prepared statement.

HP said as a result of the split it was postponing its October 8 analyst meeting. However, it reiterated its fiscal 2014 non-GAAP diluted earnings per share outlook of $3.70 to $3.74 and updated its net earnings per share outlook to $2.60 to $2.64.

For fiscal 2015, HP said it expects a non-GAAP diluted net earnings per share outlook of $3.83 to $4.03 and GAAP diluted net earnings per share of $3.23 to $3.42.
The HP outlook for fiscal 2015, however, does not include one-time GAAP charges the company is expected to incur in connection with the split, including advisory and tax costs.

 

Sep 24, 2014
 

September 24, 2014 – by Ron Miller – TechCrunch

There are certain ways of doing things in hardware engineering, and engineers simply follow these rules because there’s no use fighting them even if they wanted to. Frankly, most don’t even think about it because it’s just a given. But during a recent tour of Facebook’s hardware lab, director of engineering Matt Corddry said that Facebook’s scale requires the company to rethink the old rules and let engineers imagine outside industry standards.

Since Facebook began building much of its own hardware, its engineers have been able to rethink how things are done, and building equipment at Facebook’s scale demands creative thinking. “We understand our challenges, costs, operating environment and needs better than an outside vendor and we are able to specialize on the specific needs of Facebook,” Corddry told me recently.

Corddry explained that one of the ways Facebook encourages this creative thinking is to get engineers to work across specialties and talk to one another. “What I find is you need to give engineers learning experiences to find optimizations and break boundaries to get discussions across disciplines,” he said.

Facebook has found that when engineers work together instead of in isolation, interesting things begin to emerge. “Many silo these engineering teams – server, storage, database, [and so forth]. We don’t create these barriers,” he said. “We bring different teams together to find new ways to solve problems.”

Another thing Facebook does, which many companies, even hardware manufacturers, fail to do: it brings its engineers into its data centers to watch how people maintain the equipment they design. When you see someone taking out six screws (or 16) to replace a hard drive, and that has to happen hundreds of times a year, you begin to understand that there needs to be a simpler way.

That’s how Facebook designed its disk array, built for easy maintenance with no screws. You simply flip the large green lever to slide out the array, pop the small green lever in front of the hard drive you need to replace, lift the hinged lid, pull out the hard drive and pop in a new one. There are no screws involved at all, and when you operate at Facebook scale, you need to be thinking about these types of issues.

He said that the disk array design I looked at had been iterated over time to make it as simple as possible to maintain. It’s possible they aren’t done yet.

In contrast, Corddry told me he spoke to one hardware engineer at an unnamed vendor who admitted he had never watched a technician try to repair his design. When you don’t think about maintenance, it shows; we’ve all dealt with equipment with far too many small screws and poor component placement. You end up with bloody knuckles from scraping against the inside of the machine, and it’s not fun.

When you’re dealing with a few machines, that’s troubling. When you’re dealing with thousands or even tens of thousands, it takes on a different dimension.

Another way the Facebook engineers thought outside the box (literally) was the server design. Corddry told me engineers are conditioned to assume that a server design has to fit into a 2U rack slot, but he gave them permission to forget about that and imagine how they would design a server if there were no rack limitations.

When they left those limitations behind, it opened up all kinds of possibilities. Left to their own design devices, the engineers came up with a long, narrow box, and Facebook designed a rack to accommodate the new size. You can afford to do that when you’re Facebook. The box slides out of the rack and the top slides off – again, no screws – and the engineers laid out the box so that you can see the different parts at a glance. As one hardware engineer friend who looked at the design pointed out, some traditional design principles still hold, such as where you place memory in relation to the CPU and the distance you want signals to travel, but once again Facebook allowed its engineers to break the design mold and do what felt right.

Corddry said the idea is to put specialists with a narrow, deep focus together with broad generalists in a hackathon-style approach and let them have at these design problems. The teams really do come up with creative ways to solve what would otherwise have been much more difficult design problems.

It’s worth noting that when they are done producing these unique forms of hardware, Facebook open-sources them to the Open Compute Project, where the designs are handed to a community of designers to further attack the scale-computing problem and figure out ways to produce hardware that’s easier to maintain, runs more efficiently and can be managed more cost-effectively throughout its lifecycle.

Sep 09, 2014
 

September 8, 2014 – by Joseph F. Kovar and Tom Spring – CRN

Intel launched its latest Xeon family of processors that boasts as many as 18 cores, a 3X performance boost, and support for DDR4 memory for fast application performance in a world quickly moving to software-defined architectures.

The new processor families, known as Grantley, consist of the E5-2600 v3 and E5-1600 v3 processors. Intel unleashed 32 SKUs and an additional 20 custom SKUs designed for customers’ specific workload needs. The processors are available Monday, with a host of system vendors rolling out servers in tandem with Intel’s news.

“We are very excited to collectively re-architect the data center with this new Xeon platform. With the new Xeon chips we move to a software-defined world — from static to dynamic, from siloed to open, and from proprietary to open standards running on standard architecture,” said Diane Bryant, senior vice president and general manager of the Data Center Group at Santa Clara, Calif.-based Intel, at a press event unveiling the new processors.

The Xeon chip highlights include baked-in monitoring and management features for automated deployment and improved servicing capabilities. The processor also targets three workload types: compute horsepower, storage optimization and the ability to juggle a variety of network workloads. Another highlight is a performance increase of up to 3X compared with the previous Ivy Bridge generation of processors.

The move to software-defined infrastructure is inevitable and necessary, Bryant said. Intel’s Xeon processors are optimized for that transition with nearly 50 SKUs optimized for specific customer needs, she said.

Intel’s Xeon chip news coincided with a flurry of new system announcements from Cisco Systems, Hewlett-Packard, Lenovo, Dell and IBM. Intel said that as of Monday’s launch there are 65 servers shipping from various OEMs, with hundreds more to launch later this year.

Cisco, San Jose, Calif., refreshed its UCS server line with two major additions based on the Xeon processor. The first is a new line of modular servers dubbed the Cisco M-Series. The line features a new Cisco 2U chassis that fits up to eight compute modules, each of which consists of two independent Intel Xeon E3 servers. By putting 16 servers in 2U of rack space, Cisco is focusing on high-density implementations. And rather than including storage and networking on each server, the eight modules share four SSDs and dual 40-Gbit Ethernet connectivity.

Cisco also took the wraps off the UCS Mini, which uses existing B-Series blade chassis and blades, but adds the Cisco UCS 6324 Fabric Interconnect, a small device that plugs into the back of the chassis. The Cisco UCS 6324 provides network connectivity for up to eight Cisco UCS blade servers and seven direct-connect rack servers, giving it a total domain of up to 15 Cisco servers. The UCS 6324 replaces more expensive Cisco Fabric Extenders, making the UCS Mini more suitable to remote and branch office and edge computing applications.

HP, Palo Alto, Calif., told CRN prior to the Intel Grantley launch that it plans to introduce 21 new platforms as part of its ProLiant Gen9 family through fiscal year 2015, with the higher performance of the new processors to be married to RESTful APIs, a modular architecture to increase configuration flexibility, and HP’s OneView infrastructure management application for both physical resources and clouds.

Lenovo Monday showed it is pushing ahead on development of its own ThinkServer line despite the pending acquisition of IBM’s x86 server business. The Beijing-based company is unveiling two rackmount servers and one tower server that meet the ASHRAE A4 standard, allowing them to run continuously at 113 degrees Fahrenheit and significantly cut cooling requirements.

The new Lenovo servers also include the company’s AnyRAID solution, which allows any RAID adapter to connect to the server backplane; AnyFabric, which allows a choice of networking fabric; and AnyBay technology, which allows the use of any SAS, SATA or PCIe drives. They also feature an M.2 connector for miniature SSDs that can be used to decrease boot time.

Dell Monday touted flash storage and ease of management in its new 13G (generation 13) server line. One new model pairs a tier of 18 1.8-inch SATA SSDs with a tier of eight 3.5-inch spinning drives and takes advantage of Microsoft Storage Spaces to manage the tiers. The Round Rock, Texas, company also added a common connector that allows SATA, SAS and PCIe flash and spinning drives to be easily added depending on requirements.

On the management side, Dell introduced ZeroTouch, which allows servers to automatically configure themselves once installed. Another new capability is iDRAC QuickSync, which Dell said makes it the first server vendor to provide NFC-enabled technology that allows an administrator to get server updates via a smart device just by tapping the device to the server. Dell also introduced iDRAC Direct, which automatically configures servers by plugging in a USB drive.

IBM, meanwhile, unveiled its M5 portfolio of x86 servers, with an emphasis on security, efficiency and reliability. As IBM, Armonk, N.Y., readies the sale of its x86 business to Lenovo, the company is taking additional steps to quell any concerns over security with a revamp of its Trusted Platform Module that includes more encryption features and support for a new Secure Firmware Rollback feature that blocks unauthorized installation of previous firmware versions.

IBM’s product mix includes System x servers that range in configuration from 1U to an all-in-one 5U, in two-socket rack and tower designs. The new Xeon chips also will power new Flex System and NeXtScale systems aimed at high-density and energy-efficient enterprise workloads.

Aug 21, 2014
 

Now that the technologies behind our servers and networks have stabilized, IT can look forward to a different kind of constant change

August 11, 2014 – by Paul Venezia – InfoWorld

One of the best and worst parts of IT is that it’s always changing. Day to day, week to week, the only constants are help desk calls from clueless users and, well, change. As time wears on, though, we might even see a shift in the clueless user department — we are well into the time where “I’m not a computer person” holds no water. However glacial this progress may seem, users are getting savvier. But I digress.

If you look at the struggles IT has gone through in the past few decades, you can see several clearly defined eras, each shorter than the last. Coming through to today, eras seem to be measured in mere months, not years.

In the 1970s through the early 1990s we had the era of the mainframe, with S/390s and AS/400s running everything, and TN3270 green screens the predominant interface to computers. The 1980s saw the appearance of disruptive PC technology, which would obviously blossom into a wholesale global revolution by the mid-1990s, then combine with the Internet to fuel the single largest and fastest change to civilization in history.

In IT, we rode this wave of cataclysmic upheaval, building all the necessary parts along the way. Moving through the early 2000s, we were still refining and exploring this new world, making all kinds of questionable decisions as we charted a course through unknown waters.

Then the mobile revolution was upon us in the form of the first iPhone and all the subsequent fallout from there. I watched a 75-year-old man fiddling with his iPhone 5 the other day, pulling it out of his pocket and checking his text messages as if he’d been doing that since he was a teenager.

But in IT, we are actually seeing a bit of stasis. I don’t mean that the IT world isn’t moving at the speed of light — it is — but the technologies we use in our corporate data centers have progressed to the point where we can leave them be for the foreseeable future without worry that they will cause blocking problems in other areas of the infrastructure.

Within the course of a decade or so, we saw networking technology progress from 10Base-2 to 10Base-T, to 100Base-T to Gigabit Ethernet. Each leap required systemic changes in the data center and in the corporate network. We were forever replacing switching and routing gear, PC network cards, and so on. Network upgrades occurred almost yearly in some places. Now, with 10G cores and 1G to the desktop in the vast majority of corporate infrastructures, we won’t be doing any forklift upgrades for a long while. SDN is changing the game in the data center, but that’s still not the access layer. We have essentially achieved stasis there — for now.

Another area that has stabilized is server infrastructure. Virtualization has completely revolutionized server deployments, obviously, and we are now in a place where many pitfalls of server administration no longer exist. Where we once walked on tightropes every day doing basic server maintenance, we are now afforded nearly instant undo buttons, as snapshots of virtual servers allow us to roll back server updates and changes with a click. We aren’t straining under the weight of 100-pound, two-socket servers anymore, and the servers we rack and deploy carry a load that would have required several racks of hardware only a few short years ago.
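To make that “undo button” concrete, here is a minimal sketch of snapshot-and-rollback around a maintenance task, assuming a KVM/QEMU host managed through the libvirt Python bindings; the guest name web01 and the apply_updates step are hypothetical placeholders, not anything prescribed by the article.

```python
import libvirt  # libvirt-python bindings; assumes a local KVM/QEMU host


def apply_updates(domain):
    """Placeholder for the actual maintenance work (package updates, config changes, ...)."""
    pass


# Connect to the hypervisor and look up a hypothetical guest named "web01".
conn = libvirt.open("qemu:///system")
dom = conn.lookupByName("web01")

# Take a snapshot before touching the server -- this is the undo button.
snapshot_xml = """
<domainsnapshot>
  <name>pre-maintenance</name>
  <description>State captured before applying server updates</description>
</domainsnapshot>
"""
snap = dom.snapshotCreateXML(snapshot_xml, 0)

try:
    apply_updates(dom)
except Exception:
    # Roll the guest back to its pre-maintenance state if anything goes wrong.
    dom.revertToSnapshot(snap, 0)
    raise
finally:
    conn.close()
```

The point is not the specific tooling; it is that a failed change on a virtual server is now a one-call revert rather than an evening of rebuilding a physical box.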

Even the desktop system has changed completely. Gone are the bulky tower PCs that were constantly getting kicked under desks. Even if there isn’t a VDI infrastructure, desktop PCs are tiny and built such that they need little of the maintenance they formerly required. These days, many users want laptops, which are generally cheap and reliable.

Of course, the cloud explosion has eliminated many internal services completely, as long as you’re willing to place a certain amount of critical data and applications in the hands of others. The Application Service Provider pipe dream from the year 2000 is finally reality.

What all this means for IT is not that we can finally sit back and take a break after decades of turbulence, but that we can now focus less on the foundational elements of IT and more on the refinements. We can collectively direct our attention away from rinse-and-repeat network and server overhauls and toward extending the functionality of our computing infrastructure, at least in the corporate data center.

In essence, we have finally built the transcontinental railroad, and now we can use it to completely transform our Wild West. This isn’t a period of stasis, but the launching pad for the next revolution.

It’s sure to be a heck of a ride.

Oct 30, 2013
 

October 25, 2013 – by Serdar Yegulalp – InfoWorld

When was the last time we loved HP for making a piece of hardware that wasn’t just a notebook? Too long, it seems.

The company that once made the best laser printers (and calculators and scientific equipment) may have found something new to sink its teeth into: 3D printing.

As originally reported by The Register, HP CEO Meg Whitman spoke in Bangkok at the Canalys Channels Forum about how the company wanted to enter the 3D printing market in 2014 and “lead this business.”

Her comments hinted at how 3D printing could be made far less time-consuming: “To print a bottle can take eight to 10 hours. That’s all very interesting, but it is like watching ice melt.”

Given the venue, many of her comments were clearly aimed at businesses rather than individuals. But having a company the size of HP sink its teeth into a technology problem like 3D printing is a way to all but guarantee it’ll become a commodity technology.

HP produced a 3D printer back in 2010 under the Designjet brand, a label HP normally uses for its wide-format printers and plotters. But with its $17,000 price tag, it was clearly aimed at the corporate and high-end industrial market. It didn’t stand to make much of a splash with the same crowd that could pick up a MakerBot Replicator 2 for $2,199.

But $2,199 is still a lot of money. A big part of what could further drive down the cost of 3D printing wouldn’t just be cheaper printers, but a larger net of support for them. Color printing has gone from a costly luxury to casual availability for the end user, in large part thanks to a whole subindustry that provides the inks.

HP could follow a similar route and supply not just the printers, but create a whole ecosystem to support them and further drive down costs. That would include the raw materials, the designs (especially those that require licensing), and so on. It’s not a feat HP could accomplish casually, but it would show a commitment to driving down prices across the ecosystem.

There’s little question HP is entering a market that may already be dominated from the bottom up, though. The sheer number of 3D printing devices that are crowdfunded is proof of that: the QU-BD One Up, the Helix, and the Asterid. But there’s always room for competition: MakerBot, one of the few household names in the space, was recently purchased by another 3D printer maker, Stratasys, for some $403 million in stock.

Jul 12, 2013
 

July 4, 2013 – by Dave Ohara – www.greenm3.com

Here is a question: Who do you focus on if you want to achieve long-term customer satisfaction with a data center build or lease?

Most would focus on the decision makers of the initial project. But too many times, the people who start the project are not the ones who live with the decisions made. And in the worst case, the team making the initial data center choices is optimizing for its own budget and internal visibility rather than the long-term cost, operations and availability of the data center. Any problems in operations can easily be deflected by claiming that the operations team is at fault and the design was perfect.

I always watch out for those who make it seem like their designs are perfect and have no issues. Any good design has trade-offs, and some of those trade-offs may not be the ones you would make. A high-availability data center will cost more to build and operate. An energy-efficient design may have higher inlet temperatures, which makes it hard to accommodate legacy systems. There is no perfect car, especially not one perfect for everyone, and there are no perfect data centers. People are most proud of their acquisition within the first months, and they talk about how it is the best data center as if they were Donald Trump showing off his latest building. After a year the novelty wears off.

Except… There are a set of people that will show off their data center years after it was commissioned.

Who? The operations team who take pride in their work. Those who had an active role during construction and have a loud voice in operations are way more likely to be proud of their data centers. These are the people who will tell their peers about the vendors used, procedures, best practices, and the issues they have run into.

If you spend more time focusing on the data center operations team, there is a good chance you’ll increase customer satisfaction.

In the data center industry, the big are getting bigger. The small are folding their operations into the cloud. The middle is silent as it gets squeezed on markets and margins and finds it hard to compete. In this shift, the role of data center operations will grow.

Oct 24, 2012
 

HP has announced the discontinuation and end of life of its G7 servers and options, as has IBM for its M3 servers and options.

Supply in traditional distribution has already started to become scarce as the manufacturers incentivize the distributors to move customers to new Gen8 or M4 product. As in the past, there will be a period of time when many of your customers will still have demand for the previous-generation product. Reasons for this include working within existing certifications, a preference for a price point they’ve standardized on, and compatibility with their existing infrastructure.

Arbitech has taken a large stocking position on G7/M3 equipment to support those customers looking for lifecycle enhancement. These products carry the same warranty and are in retail condition. Because of our independent status, we can focus on carrying the product you and your customers want rather than what the manufacturers dictate we sell.

Please contact your Arbitech representative if your customers have requirements where the previous generation is the preferred option.

Sep 26, 2012
 

We work in an industry where a single idea can revolutionize the landscape overnight, which is why all of us at Arbitech are committed to staying as far ahead of the curve as we can. Last month, Arbitech executives attended the VMworld conference, a gathering place to exchange ideas and best practices on virtualization and cloud computing. The conference, held at San Francisco’s Moscone Center on August 26–29, attracted some of the biggest names in the tech industry.

Arbitech sent many of its leaders, all conference veterans: Ian Shively, Principal Architect, went for the fourth year in a row; Andrew Stewart, Sales Manager, attended his third consecutive conference; Arbitech Chairman and CEO Torin Pavia attended for the second time; and Jimmy Whalen, VP of Business Initiatives, made his third visit to VMworld.

The Arbitech team attended more than 55 hours of breakout sessions and visited nearly 80 manufacturer and vendor booths to learn about the new technologies and products that will soon be in the marketplace. In hands-on labs, they also spent five hours learning about VMware’s core products and the enhancements to its industry-leading software. The Arbitech team honed their skills on innovative technology including VMware’s flagship ESXi products; the newly released VDI product, View 5.1; and offerings such as SRM, VSA, vCloud Director, NIOC and SIOC, among many others.

Arbitech is always looking for ways to address the changing needs of its customers in such a complex industry. Attending industry conferences such as VMworld keeps the company abreast of the latest changes.

Aug 09, 2012
 

In the new era of information technology, it is possible to store and access data from wherever you are. A network of remote machines handles the storage and processing needed to make that possible. This is cloud technology, created to revolutionize the IT department.

With the introduction of cloud computing, the workload shifts considerably. Personal computers no longer have to handle all the heavy lifting needed to run applications. Instead, the network of computers that makes up the cloud provider’s servers handles the workload. You don’t need loads of hardware and software to take advantage of cloud computing; all you need is interface software to reach the cloud servers, which can be something as simple as a web browser. Once you connect through that interface, the cloud servers do the rest of the job.

Anyone who has used the Internet has almost certainly used some form of cloud computing. The most familiar example is a hosted email service such as Gmail: you don’t run an email program on your personal computer, and you don’t store your email there either. Instead, you use the Internet to access your email remotely. With cloud technology becoming more and more popular, everyone wants to know what it will look like in the future.
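To make that thin-client model concrete, here is a minimal sketch, assuming Python’s standard-library imaplib and email modules and placeholder credentials (Gmail typically requires an app-specific password or OAuth), that reads the newest message subjects from a hosted mailbox. The mail itself stays on the provider’s servers; the web browser described above is simply another interface to the same remotely stored data.

```python
import imaplib
import email

# Placeholder account details -- replace with your own.
IMAP_HOST = "imap.gmail.com"        # the mail lives on the provider's servers
USER = "user@example.com"
PASSWORD = "app-specific-password"  # Gmail usually requires an app password

# Connect over the Internet; nothing is stored or processed locally.
conn = imaplib.IMAP4_SSL(IMAP_HOST)
conn.login(USER, PASSWORD)
conn.select("INBOX", readonly=True)

# Ask the remote server for message IDs and print the last five subjects.
_, data = conn.search(None, "ALL")
for msg_id in data[0].split()[-5:]:
    _, msg_data = conn.fetch(msg_id, "(RFC822.HEADER)")
    headers = email.message_from_bytes(msg_data[0][1])
    print(headers["Subject"])

conn.logout()
```

The local machine does nothing more than ask questions and display answers, which is exactly the division of labor the paragraphs above describe.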

Before discussing the future of cloud technology, let’s discuss its drawbacks. The main reason most people and businesses are not using cloud technology is its security. Security of the cloud technology is not up to the standards of what an individual or a company would require. However, the fact that cloud technology is still in its initial stages should not be ignored. Engineers are still doing work on making cloud technology perfect, and this issue may be resolved soon.

Now, let’s discuss the possible applications of cloud technology. Many technological experts think that cloud technology will change the whole world in the future. Currently, people can use cloud technology only as a storage device, but it is expected that in the future, cloud technology will become so involved in our lives that the line between the real and virtual world will become blurred. Cloud technology will definitely change our lives, hopefully for the better.