Friday, November 9, 2012

Lenovo sees profit growth sag in fiscal Q2


Lenovo said Thursday its net profit for the fiscal second quarter increased by only 13 percent year-over-year, marking a shift from the high profit growth the company has previously seen.
For the fiscal second quarter ending on Sept. 30, Lenovo's net profit reached US$162 million. Revenue for the quarter was a record US$8.7 billion, a year-over-year increase of 11 percent.
In its fiscal first quarter, Lenovo reported 30 percent year-on-year growth in net profit. Last year, the company saw profit growth almost doubling year-over-year in some quarters.
Lenovo has consistently reported solid growth in product shipments despite slowing growth in the PC industry and competition from Apple's iPad, which has cut into laptop sales.
The Chinese company increased PC shipments in the fiscal second quarter and was named the world's largest PC vendor by research firm Gartner. Research firm IDC, however, still ranks rival HP as the top PC vendor, with a slight lead over Lenovo.
During the quarter, Lenovo reported that its PC shipments grew year-over-year by 10.3 percent. Lenovo has credited the growth to its "protect and attack" strategy, with the company maintaining its dominant position in China's PC market while expanding into new emerging markets. In China alone, the company's revenue reached $3.9 billion, accounting for 44 percent of Lenovo's global sales.
The company also saw growth in its Mobile Internet Digital Home group, which sells smartphones. The business group saw revenue reach $718 million in the quarter, up 155 percent from the same quarter last year. In China, Lenovo's smartphone shipments reached second place, behind Samsung, according to research firm Canalys.
Lenovo announced during the quarter that it would begin selling smartphones in Indonesia, the Philippines, Vietnam and India. Microsoft's Windows 8, which was formally launched only two weeks ago, could also help grow PC sales.

Hitachi releases new 1.6TB flash modules


Making good on the flash strategy it announced in August, Hitachi Data Systems has unveiled its first flash module, a 1.6TB SAS-interface flash card.
Three months ago, HDS lifted the covers on its flash strategy, saying that, like EMC, it will put NAND flash products in servers, storage and appliances to enable compute acceleration, caching and high-performance storage.
Hitachi's Accelerated Flash Module
The new modules and accompanying flash chassis are being marketed for use in enterprise-class, mission-critical applications such as online transaction processing (OLTP) and financial data and metadata indexing.
The company is calling its solid-state platform Hitachi Accelerated Flash storage.
At the heart of the Hitachi Accelerated Flash storage is a proprietary flash controller, a CPU with firmware that manages its multi-level cell (MLC) NAND flash-based storage modules.
"We will not be dependent on any vendor per se for the SSDs [solid state drives]. We can use any. If tomorrow Samsung comes up with a drive that has four times the capacity of today's NAND or Toshiba comes up with 8X NAND, we can use that," said Roberto Basilio, vice president of Infrastructure Platforms Product Management at HDS.
HDS's controller is a multicore, high-bandwidth architecture supporting up to 128 flash DIMMs (dual in-line memory modules).
HDS is currently offering a 1.6TB flash module. Next quarter it will add a 3.2TB module. Following that it plans to offer a 6.4TB flash module.
By comparison, flash storage maker Virident offers a flash module called FlashMAX that is available in both single-level cell (SLC) and MLC NAND flash and ranges in capacity from 550GB to 2.2TB. The MLC module can generate 325,000 random read IOPS using 4K blocks and one million IOPS using 512-byte blocks. The SLC card can generate up to 340,000 IOPS using 4K blocks and 1.4 million IOPS using 512-byte blocks.
Hitachi Accelerated Flash storage also uses a new 8U-high flash chassis (a U, or unit, equals 1.75 in.) that holds up to 48 drives, along with a rack-optimized flash module drive (FMD) and associated interconnect cables. The new flash chassis is organized as sets of four drives per 2U-high tray.
Each enclosure can scale from 6.4TB up to 76.8TB of flash storage, giving it twice the density of the largest MLC SSD available today, Hitachi said. Up to four flash enclosures can be housed in Hitachi's high-end Virtual Storage Platform (VSP) array, enabling more than 300TB of flash per system.
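As a sanity check, the chassis and system ceilings follow directly from the module size. A quick sketch in Python, using only figures quoted above:

```python
# Back-of-the-envelope check of the capacity figures quoted in the article.
MODULE_TB = 1.6          # capacity of one flash module drive (FMD)
SLOTS_PER_CHASSIS = 48   # drives per 8U flash chassis
CHASSIS_PER_VSP = 4      # flash enclosures supported per VSP array

chassis_tb = MODULE_TB * SLOTS_PER_CHASSIS   # 76.8 TB per enclosure
vsp_tb = chassis_tb * CHASSIS_PER_VSP        # 307.2 TB per system

print(f"Per enclosure: {chassis_tb:.1f} TB")   # matches the 76.8TB ceiling
print(f"Per VSP array: {vsp_tb:.1f} TB")       # the "more than 300TB" figure
```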
The flash storage can be configured for RAID 1, RAID 5 or RAID 6.
The single 1.6TB module, which uses a 6Gbps SAS 2.0 interface, can perform just over one million random read I/Os per second (IOPS) using 8K block sizes and 270,000 random write IOPS, HDS said.
HDS said it would not provide pricing for its drives or storage platforms, "but based on our calculations, Hitachi Accelerated Flash storage is up to 46% lower in cost when compared to an MLC SSD of similar capacity," a spokesman wrote in an email reply to Computerworld.
Hitachi Accelerated Flash storage introduces several new capabilities, including inline write compression, which speeds writes on flash and improves MLC flash memory endurance. When compared to standard 400GB MLC SSDs, Hitachi Accelerated Flash storage offers four times better performance and improved environmental characteristics (power and space), the company said.
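The endurance claim follows from a simple observation: compressing data before it reaches the NAND means fewer bytes are physically written, so cells accumulate program/erase cycles more slowly. A minimal illustration in Python, with zlib standing in for whatever proprietary compression the Hitachi controller performs in hardware:

```python
import zlib

# A repetitive payload standing in for typical database or log pages.
page = b"account_id,balance,timestamp\n" * 256

compressed = zlib.compress(page, 6)
savings = 1 - len(compressed) / len(page)

print(f"Logical write:  {len(page)} bytes")
print(f"Physical write: {len(compressed)} bytes")
print(f"Flash writes avoided: {savings:.0%}")  # fewer program/erase cycles per logical write
```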
Hitachi Accelerated Flash storage is fully compatible with all Hitachi VSP features, including Hitachi Dynamic Tiering (HDT), which allows data to be moved to different tiers of storage based on use patterns.
"Today's announcement is a milestone achievement in how flash technology will be used in the enterprise data center moving forward," Basilio said in a statement. "Hitachi Accelerated Flash storage is the first flash device that is optimized for the performance and reliability required for mission-critical applications."
How HDS's flash modules break down by drive, enclosure and chassis, and how they would fit into an HDS VSP array.
Lucas Mearian covers storage, disaster recovery and business continuity, financial services infrastructure and health care IT for Computerworld. Follow Lucas on Twitter at @lucasmearian or subscribe to Lucas's RSS feed. His e-mail address is lmearian@computerworld.com.
Read more about storage hardware in Computerworld's Storage Hardware Topic Center.

Verizon expands Lync service to hosted operations, management of UC


Verizon announced today it is extending its managed services offering for Microsoft Lync Server to its business customers by adding the ability to operate, monitor and manage unified communications and collaboration (UC&C) servers and functions.
Since 2010, Verizon has been helping businesses plan, design and implement UC&C with Lync, but it decided to expand the service to include operations and management of Lync from its own network operations centers, said Bob Riley, senior consultant for UC&C at Verizon Enterprise Solutions, in an interview.
The hosted service, called Managed UC&C for Microsoft Lync Server 2010, is available now in the U.S. and 19 European countries. Verizon will serve even the largest multinational businesses whose mobile workers use such services as instant messaging, collaboration over the Web and voice and videoconferencing.
Riley said the price of the service will vary by the number of Lync servers under management and the number of users. He said Microsoft estimated the cost at $7 per worker per month.
By operating and managing the Microsoft System Center Operations Manager gateway servers at a business location, Verizon hopes to distinguish itself from other Lync service providers, Riley said. SCOM acts as a hub and collection point for data from the various UC&C Lync servers, ranging from SharePoint and Exchange to SQL Server, as well as from a Verizon voice gateway.
Demand from enterprises for a range of UC&C capabilities is high, especially to improve worker productivity and lower costs, Riley said. With the new capability, a large company can use instant messaging, audio, video and Web conferencing and even desktop videoconferencing through Lync, along with voice services over Verizon's global IP network. That IP network offers Voice over IP and the Session Initiation Protocol (SIP), which are both integrated into the public switched telephone network.
In addition, the new management service can be combined with Verizon SIP Trunking connections (offered since 2007) with managed session border controllers for better network security and reliability. Verizon uses a variety of third-party vendors that have partnered with Microsoft to round out its UC&C services capability, including Acme Packet, a provider of session border controllers, and Polycom, a videoconferencing provider.
Verizon also offers hosted services using the Cisco Hosted Collaboration Services capability and others, but Lync is desired by many companies that already incorporate Microsoft applications and software throughout their IT operations, Riley said.
Read more about unified communications in Computerworld's Unified Communications Topic Center.

Amazon, Microsoft and Google targeted by cloud provider Joyent


Joyent may be the biggest cloud provider you haven't heard of.
According to the pure-play infrastructure as a service (IaaS) provider -- which was founded in 2004 and is headquartered in San Francisco -- it is a top-five vendor of cloud-based virtual machines in the world, a stat that's backed up by Gartner. That means it's rubbing elbows with the big names of cloud computing -- Amazon Web Services, Rackspace, Microsoft and Google.
"They're the most interesting cloud company that few people talk about," says George Reese, CTO of enStratus, a company that consults with enterprises on cloud strategies and helps business deployapplications to the cloud. "When we talk to people we get questions about AWS, Rackspace, HP, and when we mention Joyent, they're like, 'Who?'"
Perhaps users should start paying attention, though. The company this week released Joyent7, the latest version of its cloud management platform named SmartOS, which it says enhances capabilities for hybrid cloud deployments between a customer data center and Joyent's cloud.
Company founder and CTO Jason Hoffman is aiming for the fences with his company, openly stating that he's looking to take on the Amazons, Googles and Microsofts of the world.
Does he have a shot?
Joyent's differentiator, Hoffman says, is its integrated stack. SmartOS is not just an operating system, but also a networking fabric and hypervisor -- it uses KVM. He describes it as analogous to a large-scale storage area network (SAN), with an integrated network between compute and data layers that runs virtual machines directly on it. "We completely collapse the model into a single hardware design," he says. By doing this, new customers are easily onboarded to the cloud, with each new customer site added to Joyent's network being the equivalent of adding another availability zone in AWS's system.
Hoffman says Joyent is cheaper and offers more compute for the buck compared to AWS. A pricing comparison chart on the company's website shows that Joyent prices are between 6% and 29% less compared to prices of similarly sized VM instance types in AWS's cloud.
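Those percentages are easier to read as dollars. A rough sketch of the math, with a hypothetical hourly rate, since the article quotes only the 6% to 29% range and not actual instance prices:

```python
# The 6% and 29% savings figures come from Joyent's comparison chart as
# described in the article; the hourly dollar rate below is a hypothetical
# placeholder, not a published price.
aws_price_per_hour = 0.16
hours_per_month = 24 * 30

for savings in (0.06, 0.29):
    joyent_price = aws_price_per_hour * (1 - savings)
    delta = (aws_price_per_hour - joyent_price) * hours_per_month
    print(f"{savings:.0%} cheaper -> roughly ${delta:.2f} saved per VM per month")
```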
Reese, the cloud consultant, says Joyent seems to have a dedicated user base, but it is still a niche play in the market. "They don't have a ton of features, but the features they do have perform really well," Reese says. VMs come up fast and are predictable and reliable, he says, based on testing he's done within enStratus for customers using Joyent's cloud.
Joyent seems optimized for customers that run large, complex, cloud-native apps in Joyent's cloud, apps for which developers want high visibility and highly reliable performance, Reese says. The focus on its core features leaves some wanting, though. Joyent doesn't have a database as a service feature, for example, nor does it have nearly the breadth of services offered by AWS or Rackspace. Ultimately, that could make it a challenge for Joyent to bite significantly into Amazon's or Rackspace's dominant market share.
Joyent is continuing to develop its products and company, though. The release of Joyent7 is about enabling "seamless hybrid cloud," Hoffman says. The new OS furthers LDAP integration and adds a catalog of APIs, specifically around workflow management, image management and security groups.
In addition to announcing Joyent7, the company appointed a new CEO, Henry Wasik, formerly president and CEO of Force10 Networks.
Hoffman likes his chances of going up against the gorillas of the industry. "If someone really wants to take on AWS," which Hoffman clearly states he wants to do, "you have to be multi-region, multi-AZ from the get-go." If a provider takes a pure-hardware approach, he says, it would cost a half-billion dollars to set up. "We're in a space where, as a private company, we're partnering with a top-three chip maker [Intel], we have our own technology stack end-to-end and we've raised hundreds of millions of dollars." The company announced its latest $85 million funding round in January.
Gartner says it will be an uphill climb for Joyent, though, especially when it's competing with companies that have much greater resources they can devote to R&D. "Joyent is focused on developing its own technology, which creates long-term challenges in competing against providers with greater development resources," Gartner says. If Joyent remains a niche provider, Reese believes it has a chance to carve out a chunk of the market and serve it well. It's an open question if a company like Joyent can scale up to the size of some of the major cloud providers in the market, though.
Network World staff writer Brandon Butler covers cloud computing and social collaboration. He can be reached at BButler@nww.com and found on Twitter at @BButlerNWW.

Citrix and NetApp simplify on-premises data sharing


Citrix Systems and NetApp have jointly developed a software and hardware package optimized for Citrix's ShareFile with StorageZones.
ShareFile with StorageZones is Citrix's enterprise-friendly answer to cloud-based file storage and sharing services like Dropbox, Apple's iCloud and Google Drive, and it allows CIOs to place data in the organization's own data center rather than in the cloud.
Enterprises can now meet compliance and data sovereignty requirements, while users can still access their documents and images from anywhere at any time. Storing data close to users also helps improve performance, the two companies said on Wednesday.
By joining forces, Citrix and NetApp hope to help enterprises simplify and accelerate large-scale, on-premises data sharing and storage deployments, they said. The two have seen to it that StorageZones works with NetApp's FAS and V-Series storage systems running its clustered Data ONTAP software.
The integration lets enterprises take advantage of features such as de-duplication and compression to decrease the amount of storage needed to host employee content.
De-duplication does that by eliminating duplicate content, so if two users store the same image, only one copy needs to be kept.
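A minimal sketch of the idea in Python: content is keyed by a hash, so identical uploads from different users resolve to a single stored copy. This illustrates the general technique only, not NetApp's block-level implementation inside Data ONTAP:

```python
import hashlib

class DedupStore:
    """Toy content-addressed store: identical content is kept only once."""

    def __init__(self):
        self._blobs = {}   # content hash -> bytes actually stored
        self._files = {}   # (user, filename) -> content hash

    def put(self, user, filename, data):
        digest = hashlib.sha256(data).hexdigest()
        self._blobs.setdefault(digest, data)       # second identical upload is a no-op
        self._files[(user, filename)] = digest

    def stored_bytes(self):
        return sum(len(blob) for blob in self._blobs.values())

store = DedupStore()
image = b"\x89PNG fake image bytes " * 1000
store.put("alice", "team_photo.png", image)
store.put("bob", "copy_of_team_photo.png", image)   # same content, different user
print(store.stored_bytes(), "bytes stored for two logical files")
```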
NetApp's Snapshot technology can be used to back up and recover the data.
This is the second time in less than a month Citrix and NetApp have joined forces to make life a little easier for IT staff.
Last month the two companies announced they are working on integrating NetApp's storage software with Citrix's CloudPlatform and the Apache CloudStack project, offering features such as storage automation and virtual machine backup and recovery.
On Wednesday, Citrix also announced two new NetScaler MPX hardware appliances aimed at smaller enterprises.
The MPX 5550 and MPX 5650 start at US$14,000 and are shipping now. They can be used for load balancing, SSL processing and traffic inspection to protect against threats such as cross-site scripting and SQL injection.
Send news tips and comments to mikael_ricknas@idg.com

Oracle buys Instantis for project portfolio management software


Oracle on Thursday said it has agreed to acquire PPM (project portfolio management) software vendor Instantis, in a move that will build upon its past acquisition of Primavera. Terms of the deal, which is expected to be completed this year, were not disclosed.
Instantis has both on-premises and cloud-based software, which will be combined with Primavera as well as Oracle's next-generation Fusion Applications, according to a statement. All told, the software "will provide the ability to manage, track and report on enterprise strategies - from capital construction and maintenance, to manufacturing, IT, new product development, Lean Six Sigma, and other corporate initiatives," Oracle said.
Instantis' main product is called EnterpriseTrack, which incorporates dashboards and reports that can be shared "at any phase of the project life cycle from ideas to proposals to project execution to metrics and results," according to its website. The company's software also has a native social networking platform called EnterpriseStream, as well as an integration framework for tying EnterpriseTrack to other systems.
Oracle "plans to continue to invest in Instantis' technology, evolving the solutions organically and deepening the integration capabilities with Oracle technology," according to a FAQ document released Thursday.
As with all of its acquisitions, Oracle will also gain further footholds in enterprise accounts, giving its sales representatives opportunities to cross-sell and up-sell other products to Instantis users, which include Ingram Micro, DuPont, Credit Suisse and Xerox.
Oracle's competitors in the PPM market include CA Technologies, IBM and a number of smaller vendors.
Chris Kanaracus covers enterprise software and general technology breaking news for The IDG News Service. Chris' email address is Chris_Kanaracus@idg.com

The cloud as data-center extension


A year after Oregon's Multnomah County deployed an on-premises portfolio management application, the two IT staffers dedicated to it resigned. Other staff struggled to maintain the specialized server environment. Left with no other option to guarantee support of the mission-critical tool, the county leapt into the cloud.
"All of our IT projects are tracked through Planview," says Staci Cenis, IT project manager for Multnomah County, which includes Portland. "We use it for time accountability and planning. Monitoring scheduled and unscheduled maintenance shows us when staff will be free to take on another project."
Initially the county had two dedicated Planview administrators, Cenis explains. But over a period of around three months in 2009, both left their jobs at the county, "leaving us with no coverage," Cenis says. "We didn't have anyone on staff who had been trained on the configuration of our Planview instance or understood the technical pieces of the jobs that run within the tool to update the tables," among other things.
Cenis hadn't considered the cloud before that issue, but agreed to abandon the in-house software in favor of Planview's software-as-a-service (SaaS) offering after assessing the costs. Training other IT staffers on server, storage, backup administration, recovery and upgrades alone would have compounded the on-premises software expenses, Cenis says.
Nowadays, with the infrastructure and application administration offloaded to the cloud, IT can handle most configuration, testing and disaster recovery concerns during a regularly scheduled monthly call. "I wish we had gone with the cloud from the start because it has alleviated a significant burden," Cenis says, especially in the area of software upgrades.
Each upgrade handled by the application provider instead of her team, she estimates, adds numerous hours back into her resource pool. "What would have taken us days if not weeks to troubleshoot is generally answered and fixed within a day or two," she adds. At the same time, users can access the latest software version within a month or two of its release.
Multnomah County's embrace of the cloud is one of five models becoming more common today, according to Anne Thomas Manes, vice president and distinguished analyst at Gartner.
Gartner categorizes them as follows:
Replace, as Multnomah County did by ripping out infrastructure and going with SaaS;
Re-host, where IT still manages the software, but it is hosted on external infrastructure such as Amazon, HP or Rackspace public or private cloud servers;
Refactor, where some simple changes are made to the application to take advantage of platform-as-a-service;
Revise, where code or data frameworks have to be adapted for PaaS;
Rebuild, where developers and IT scrap application code and start over using PaaS.
"Not a lot of companies rebuild or do a lot of major modifications to migrate an application to the cloud. Instead, they either replace, re-host or refactor," Manes says.
Primarily, enterprises view the cloud as an escape hatch for an overworked, out-of-space data center. "If you're faced with the prospect of building a new data center, which costs billions of dollars, it certainly saves money to take a bunch of less critical applications and toss them into the cloud," Manes says.
Problems in paradise?
However, since first observing the cloud frenzy years ago, Manes recognizes companies have taken their lumps. "Many business leaders were so eager to get to the cloud that they didn't get IT involved to institute proper redundancy or legal to execute proper agreements," she says. Such oversights have left them vulnerable technologically and monetarily to outages and other issues.
Companies that moved applications and data to the public cloud early on also didn't always plan for outages with traditional measures such as load balancing. "Even if an outage is centralized in one part of the country, it can have a cascading effect, and if it lasts more than a day can cause a real problem for businesses," she says.
Tips for getting to the cloud
Know what should go where: If you require a more controlled environment for your data, consider building a hybrid cloud using internal servers and shared dedicated cloud infrastructure. Doing so enables you to track where data lives without having to manage a sprawling data center.
Understand your licensing: Some companies are unwittingly getting double-charged by software companies and service providers for application, operating system and other licensing. Double-check your contracts and if yours doesn't include cloud architecture, then renegotiate with your vendors. Consult with your cloud provider because it might have an in-place deal with software makers. Also, as NASA's JPL advises, make sure to involve your legal team in all service agreements.
Stay involved: Sending your applications to the cloud might free up infrastructure and administrators, but IT still has to keep a close eye on critical elements such as security, integration, configurations, updates and disaster recovery. Multnomah County regularly meets with its SaaS provider to ensure proper communication and support levels.
Missing something? Don't be afraid to ask: Cloud providers are eager to please and want your business. Inform your cloud providers when a feature or functionality is absent from your service or platform. If you need load balancing, a provider probably will support that for you without much additional cost.
Seek support: You can offload cloud management to a third party if it is too onerous for your in-house team. For instance, some cloud providers will handle round-the-clock technical support of environments hosted in the Amazon cloud.
-- Sandra Gittlen
But Dave Woods, senior process manager at business intelligence service SNL Financial, disagrees. SNL Financial aggregates and analyzes publicly available data from around the world for its clients. Despite having a sizeable internal data center, the company's homegrown legacy workflow management application was testing its limits.
"Our data center was full" with both internal and customer-facing applications and databases, Woods says. The company didn't do a full-on analysis to find out whether it was server space or cooling or other limitations -- or all of the above -- but at some point it became clear that they were running out of capacity, and cloud software became attractive.
Though he briefly considered rebuilding the application and building out the data center, the costs, timeframe and instability of the code dissuaded him. "The legacy application lacked the design and flexibility we needed to improve our processes," Woods says. The goal, in other words, was not just to rehost the application but to do some serious workflow process improvement as well.
To accomplish this, SNL Financial adopted Appian's cloud-based business process management system. Although the annual licensing cost was similar to the on-premises software the firm had been using, the clincher was avoiding the $70,000 in hardware costs that would have been needed to update the application at the time. (SNL has since built a "spectacular new onsite data center," Woods says, so it's no longer an issue.)
SNL Financial is expanding its workflow processes to more than 500 banks in Asia, with Woods crediting the cloud for allowing this type of scalability and geographic reach. "We wouldn't have been able to improve our legacy workflow in this way. There was a much longer IT development life cycle to contend with. Also, the application wouldn't have had as much capability," he says.
"These platforms are mission-critical to us, not a side project," Woods explains. "They affect our business engine at our core and they have to enable us to fulfill our timeline guarantees to our customers," he says.
The processes Woods refers to are those involving collecting, auditing and reviewing data and news for specific industries -- the information that SNL sells to clients, in other words.
That's not to say there haven't been some bumps on the road to the cloud. Woods says that while IT was brought in at the start of the decision-making, his process-improvement team missed the mark on making sure IT was fully informed. "We found that no matter how much we thought we were doing a good job communicating with IT and networking, over-communication is the order of the day," he says.
Building up trust in the cloud
NASA's Jet Propulsion Laboratory (JPL) has a similar stick-to-it attitude with the cloud. With more than 100 terabytes spread across 10 different services, JPL's trust in the cloud built up over time.
Its first foray was in 2009, when reality sunk in that the 30-day Mars Exploration Rover (MER) mission would last far longer than originally thought, and demand far more resources than the internal data center could handle. (MER is still sending data back to Earth.)
"All of our IT systems had filled up. We either needed to build new IT systems internally or move to the cloud," says Tom Soderstrom, CTO.
Soderstrom and his team of technicians and developers used Microsoft's then-nascent Azure platform to host its "Be a Martian" outreach program. Immediately, JPL saw the benefits of the elasticity of the cloud, which can spin up resources in line with user demand.
In fact, outreach has proven a fertile playground for JPL's cloud efforts, such as using Google Apps as the foundation for its "Postcard from Mars" program for schoolchildren. Soderstrom calls the platform ideal because it enables an outside-the-firewall partnership with developers at the University of California, San Diego.
External developers are simply authorized in Google -- by JPL's IT group -- to work on the project. "If we used the internal data center, we would have had to issue them accounts and machines, get them badged by JPL, and have them go into schools to install and manage the application code," Soderstrom says. "The cloud approach is less expensive and more effective."
JPL also taps Amazon Web Services for various projects, including its contest for EclipseCon, the annual meeting of the Eclipse open-source community. "All testing, coding and scoring is done in Amazon's cloud so our internal data centers don't have to take the hit," he says.
The cloud benefits internal projects, too, including processing data from the Mars missions. To tile 180,000 images sent from Mars, the data center would have to spin servers around the clock for 15 days or more. JPL would have to foot the cost of that infrastructure and spend time on provisioning specifications down to the type of power plug required.
In contrast, the same process took less than five hours using the Amazon cloud and cost about $200, according to Soderstrom.
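The arithmetic behind that comparison is stark; a quick sketch using only the figures Soderstrom cites:

```python
# Figures quoted by Soderstrom in the article; the rest is simple arithmetic.
on_prem_hours = 15 * 24   # "spin servers around the clock for 15 days or more"
cloud_hours = 5           # "less than five hours using the Amazon cloud"
cloud_cost_usd = 200      # "cost about $200"

print(f"Elapsed-time speedup: {on_prem_hours / cloud_hours:.0f}x or better")   # ~72x
print(f"Effective cloud cost per processing hour: ${cloud_cost_usd / cloud_hours:.0f}")
```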
As cloud use grows in popularity and criticality, JPL continues to beef up its cloud-based disaster recovery/business continuity, using multiple geographic zones from a single service provider as well as multiple vendors. "We always have failover for everything and consider it as insurance," he says. For the summer Mars landing, JPL instituted a double-failover system. "All cloud vendors are going to have outages; you just have to determine how much failover is required to endure it," he says.
For its data on Amazon, JPL switched on load balancers to move data between zones as necessary. "Previously, network engineers would have been needed to do that kind of planning; now app developers can put in these measures themselves via point and click," Soderstrom says.
Self-service provisioning
There have been hiccups along the way, such as trying to match the application to the cloud service. "Cloud services used to be a relationship between a provider and a business leader with a credit card," Soderstrom says. Now, "we make sure IT is involved at every level," he explains.
To accomplish this, JPL has standardized its cloud provisioning overall, creating an online form that business leaders and developers fill out about their project. Based on pre-set templates created by IT, their plain-English answers to questions such as "are you going to need scalability?" and "where is your customer and where is your data?" determine which cloud service they will use and the level of resources they will need.
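A minimal sketch of how such a questionnaire-driven lookup might work; the two questions are paraphrased from the article, while the template names and resource levels are hypothetical, since JPL has not published its actual catalog:

```python
# Hypothetical questionnaire-to-template lookup; the questions come from the
# article, but the template names and sizes below are illustrative placeholders.
TEMPLATES = {
    ("yes", "external"): {"service": "public cloud",         "resources": "auto-scaling group"},
    ("yes", "internal"): {"service": "private cloud",        "resources": "large instance pool"},
    ("no",  "external"): {"service": "public cloud",         "resources": "single small instance"},
    ("no",  "internal"): {"service": "internal data center", "resources": "single small VM"},
}

def pick_template(needs_scalability: str, data_location: str) -> dict:
    """Answers to 'Are you going to need scalability?' and
    'Where is your customer and where is your data?' select a pre-set template."""
    return TEMPLATES[(needs_scalability.strip().lower(), data_location.strip().lower())]

print(pick_template("yes", "external"))
```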
The move to self-service provisioning has meant retraining system administrators to be knowledgeable about cloud-use cases. Also, IT security staffers serve as consultants for the cloud environment, vetting and hardening operating system and application builds.
Though this sounds like a complicated evolution, Soderstrom says the technical challenges presented by the cloud have been easy compared with the legal ones. Legal is front and center in all negotiations to ensure appropriate licensing, procurement and compliance deals are struck and adhered to.
In all its cloud contracts, JPL includes language about owning the data. In case of service shutdown, a dispute or other agreement termination, the provider must ship all data back on disks, with NASA picking up the labor tab.
Overall, though, Soderstrom says he is glad he made the leap. "Cloud is changing the entire computing landscape and I'm very comfortable with it. Nothing has been this revolutionary since the PC or the Internet."




Why isn't Microsoft's answer to Siri built into Windows 8?


Windows 8 is supposed to be Microsoft's majestic OS reset: a dramatic overhaul designed to usher the Windows platform into the age of mobility. And Windows 8 is also Microsoft's bid to achieve feature parity with iOS and Android, the other two OS powerhouses in the mobile universe.
But one key feature, one hot, relevant, rock-star-caliber feature, is conspicuously absent from the Windows 8 repertoire: Intelligent, semantically aware voice control is nowhere to be found in the new OS.
iPads and iPhones have a voice dictation button built right into their virtual keyboards. And Google integrated its own set of deep voice control features into the Jelly Bean version of Android that was released earlier this year. So how come voice control isn't a forward-facing, marquee feature of Windows 8?
The short answer is that voice-control technology hasn't made it to laptops or desktops in a meaningful way for either PCs or Macs, and Windows 8, at least for the short run, is much more of a computer OS than a tablet OS.
In Windows 8 (as in Windows 7 and Vista), speech recognition remains relegated to the role of an assistive technology designed to help disabled customers use their PCs. The Windows Speech Recognition feature in Vista and Windows 7 allowed users to control a few minor OS behaviors with their own voices, and users could also dictate text, all with varying degrees of success.
Relative to Windows 7, Windows 8 offers incremental accessibility improvements, but it also demonstrates that there's no real desire on Microsoft's part to make voice control a major feature of the OS. Windows 8 can recognize your voice if you're using a microphone and can carry out some simple commands, but it doesn't offer anything approaching the voice-controlled "personal assistant" experience that we find in Apple's Siri.
A missed opportunity
Microsoft didn't always show so little interest in voice control. The software giant introduced Windows Speech Recognition (WSR) in Windows Vista, and at the time seemed very interested in putting all Windows users on speaking terms with their computers. The company also demonstrated a feature called Windows Speech Recognition Macros, which enabled the OS to perform certain repetitive tasks in response to a voice command. Unfortunately, the feature required users to write their own macros (e.g., "open file"), and, as a result, WSR was mostly used by advanced users.
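Conceptually, a speech macro is just a mapping from a recognized phrase to an action. The sketch below illustrates that idea in Python using the third-party speech_recognition package; it is not Microsoft's WSR Macros format, and the microphone, the PyAudio dependency and the networked recognizer service are all assumptions:

```python
# Illustrative only: the core idea of a speech macro is a phrase-to-action map.
# Uses the third-party speech_recognition package, not Microsoft's WSR Macros.
import datetime
import subprocess
import speech_recognition as sr

COMMANDS = {
    "open notepad": lambda: subprocess.Popen(["notepad.exe"]),   # Windows-only example
    "what time is it": lambda: print(datetime.datetime.now().strftime("%H:%M")),
}

recognizer = sr.Recognizer()
with sr.Microphone() as source:          # requires the PyAudio dependency
    print("Listening for a command...")
    audio = recognizer.listen(source)

phrase = recognizer.recognize_google(audio).lower()   # needs network access
action = COMMANDS.get(phrase)
if action:
    action()
else:
    print(f"No macro defined for: {phrase!r}")
```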
Microsoft bought the voice portal company TellMe in 2007, and appeared poised to use the voice recognition technology it received in the deal to put voice command into Windows. But it was not to be. The TellMe technology ended up being used mainly for voice commands in Windows Phone 7 and 8.
Siri's influence
For many of us, the iPhone 4S's Siri feature was our first experience with a voice-recognition system that did more than just transcribe words and open windows. Indeed, Siri is something much deeper than a voice-recognition tool. It's a personal assistant that understands relatively nuanced wording, and it performs many of the tasks we ask of our smartphones.
Siri lets us compose and send text messages and emails using voice alone. We can use it to schedule meetings, ask for directions, set reminders, and so on. And when it comes to search, Siri uses semantic technology to understand information requests spoken in plain English, like, "What is the largest city in Texas?"
Apple and Google are already racing to perfect semantic voice control for use in mobile devices, and Microsoft could have jumped into the fray as well, reviving voice recognition as a major feature in Windows 8. In fact, Microsoft could have leapfrogged the competition by bringing semantic voice control to the desktop. This could have been the killer feature that persuaded legions of skeptical XP and Windows 7 users to make the jump to Windows 8.
Laptop and desktop PC manufacturers could have benefited greatly too. The industry is desperate to curtail sliding PC sales as more and more users show an interest in tablets.  Intelligent voice recognition for laptops and desktops could have been the sticky feature that product managers crave.
Unfortunately, as it stands, PC manufacturers believe consumers primarily want voice command on their mobile devices and are fine with manual keyboard control on their PCs. "Most of the [voice control] R&D momentum is going to serve the mobile market: smart devices, namely phones and tablets, where there appears to be, at least in the short term, no end in demand," says analyst Patricia Kutza of tech market research house BCC Research.
Voice for Ultrabooks
Intel, not Microsoft, may end up being the first big proponent of voice recognition in the PC industry. The chip maker has already worked with voice-recognition technology company Nuance to develop a voice recognition app for Ultrabooks called Dragon Assistant. Dragon Assistant runs natively on the computer and can interact with third-party apps to do things like find and play music, compose emails, surf the web, watch video and use social media, among other Siri-like talents. Nuance is currently the leading developer in the voice-recognition market. And it's an open secret that Nuance developed large parts of Siri (Apple has confirmed only that Nuance is a technology partner). The company also developed the voice-recognition system in Ford's Sync in-car systems.
Nuance came into the voice control business by making Dragon NaturallySpeaking, the best-selling desktop dictation application on the market. NaturallySpeaking also provides detailed web browsing for disabled people via voice commands. Nuance has since expanded the functionality of the product to allow users to do more things on the PC using voice.

The company says it has a strong interest in bringing a Siri-like experience to the laptop and desktop. "We believe there's a blurring of lines between form factors," says Nuance VP and general manager of Dragon devices Matt Revis. "The mobile handset has driven a desire for speech as an interface in all form factors, including desktops and laptops."

Revis says the absence of voice-based personal assistant functionality in Windows 8 has left the door open for third parties like his company to step in and provide a solution. Still, he acknowledges that direct OS integration has its benefits: "There could be advantages to having the personal assistant functionality built into the OS, around things like command and control," Revis says. "This could mean commands like 'brighten the screen' or 'go to sleep.'"

But Revis stresses that Dragon Assistant performs 80 percent of the tasks people do on their machines most often. And this includes interacting with other third-party apps for things like playing music using a music app.

If Intel and Nuance find success in building voice recognition into Intel's Ultrabook platforms, Microsoft may be pressured into building voice command into its OS in future iterations. The developer community may play a role, too. Says BCC Research's Kutza: "It's possible Microsoft might be using a 'wait and see' approach, evaluating the feedback it gets from developers before integrating this functionality into Windows 8."

Evolving security standards a challenge for cloud computing, expert says


ORLANDO -- Any enterprise looking to use cloud computing services will also be digging into what laws and regulations might hold in terms of security and privacy of data stored in the cloud. At the Cloud Security Alliance Congress in Orlando this week, discussion centered on two important regulatory frameworks now being put in place in Europe and the U.S.
The European Union, with its more than two dozen countries, has had a patchwork of data-privacy laws that each country created to adhere to the general directive set by the EU many years ago. But now there's a slow but steady march toward approving a single data-privacy regulation scheme for EU members.
These proposed rules, published by the EU earlier this year, may not become law until 2016 or later, as they require approval by the European Parliament, said Margaret Eisenhauer, an Atlanta-based attorney with expertise in data-privacy law.
Europe, especially countries such as Germany, already takes a stricter approach to data protection than the U.S., with databases holding individuals' personal information having to be registered with government authorities, and with rules on where exactly data can be transmitted. "European law is based on the protection of privacy as a fundamental human right," Eisenhauer said.
The benefit of the proposed EU regulation is that EU countries will, in theory, present a uniform approach instead of a patchwork of rules. The so-called "Article 29 Working Party Opinion" on the proposed law specifically addresses the use of cloud computing, and it presents cloud providers and users with a long list of security-control requirements.
In addition, cloud providers must offer "transparency" about their operations, something some are reluctant to do today, Eisenhauer said.
The proposed regulations also allude to how cloud-based computing contracts should be established. Among many requirements, "you have to state where the data will be processed," Eisenhauer said, plus where it will be accessed from. Customers have the right to "visit their data," she said, which means providers must be able to show the customer the physical and logical storage of it.
Some ideas could become the norm for Europe, such as the concept of the "right to be forgotten," which recognizes that individuals have a right not to be tracked across the Internet, something often done through cookies today. This "privacy by default" concept means that Web browsers, for example, will likely be required to ship with their newer "do not track" capabilities turned on by default in order to be used in Europe. In Europe, "there are real concerns about behavioral targeting," said Eisenhauer.
Some European legal concepts suggest that even the use of deep-packet inspection, a core technology in many security products today for spotting malicious activity on the network, could be frowned upon under European law, and companies will need to be mindful of how deep-packet inspection is deployed, said Eisenhauer. Even today, the use of security information and event management (SIEM) tools to monitor employee network usage does not easily conform to European ideas of data privacy.
The proposed EU data-privacy rules require that data breaches be reported very quickly to national data-privacy authorities as well as to the affected individuals. The regulation also points to possible fines for noncompliance, starting at 2% of a company's annual worldwide revenue.
However, Eisenhauer added that Europe's government data-privacy regulators encourage direct communication about any issues that come up between cloud-service providers and their customers, and they are far more eager to resolve problems than to mete out punishment.
Many companies, including HP, which is a member of the CSA, are tracking these kinds of regulatory requirements from all across the world that impact the cloud.
"You will have to answer to auditors and regulatory regimes," said Andrzej Kawalec, HP's global technology officer at HP Enterprise Security Solutions. This means that there can't be "monolithic data centers" all subscribing to one mode of operation, but ones tailored to meet compliance in Europe, Asia and North America.
In Switzerland, for example, which is not part of the EU, "the Swiss think the data should remain in Switzerland," he said. But "everyone is getting a lot more stringent" on security and data protection, Kawalec said. Some ideas, such as Europe's notion that even the user's IP address represents a piece of personally identifiable information, are not necessarily the norm in the U.S.
In the U.S., there is also a significant regulatory change afoot related to cloud computing and security, arising out of the federal government's FedRAMP program, unveiled earlier this year.
FedRAMP is intended to get cloud-service providers (CSPs) that serve government agencies accredited for specific security practices over the next two years. Although no CSP is yet certified, according to Chris Simpson, CEO at consultancy Bright Moon Security, who spoke on the topic at the CSA Congress this week, the goal is to get CSPs on board by assuring through third-party assessments that their cloud environments conform to specific security guidelines.
These include practices for incident response in the cloud, forensics in a highly dynamic environment, threat detection and analysis in a multi-tenant environment, and continuous monitoring for remediation, among other things. One FedRAMP idea is that service providers must be prepared to report security incidents of many types to US-CERT and the government agency that might be impacted. The agency would also report to US-CERT, said Simpson.
If CSPs can't meet the FedRAMP guidelines, they won't be able to provide services to government agencies, said Simpson. Once certified in FedRAMP, though, they'll have a path to contracting for all federal agencies. But if a security incident or data breach occurs that is seen as negligence, that might be cause "to pull that authorization," Simpson concluded.
Ellen Messmer is senior editor at Network World, an IDG publication and website, where she covers news and technology trends related to information security. Twitter: MessmerE. E-mail: emessmer@nww.com.

Apple seeks standard to appease angry university net managers


ATLANTA -- Under fire from its customers in the higher education market, Apple has proposed creating a new industry standard that would fix problems with its Bonjour zero configuration networking technology that is causing scalability and security problems on campus networks.
Apple described how such a standard could be used at an Internet Engineering Task Force (IETF) meeting held in Atlanta this week. Apple and other vendors including Xirrus, Check Point and IBM support the idea of creating an IETF working group to improve network services like Apple's Bonjour and Linux Avahi, which use an existing IETF protocol called Multicast DNS (MDNS). The new working group would be called MDNS Extensions or MDNSext.
Bonjour is Apple's marketing name for zero configuration networking, which allows a MacBook user to easily log into a local network and find an available printer. Behind the scenes, Bonjour provides automatic address assignment, looks up the host name and delivers available network services.
Bonjour uses MDNS, which transports DNS queries in a zero configuration way but only across local networks, not campus or enterprise networks. When it is deployed on large networks - particularly wired and wireless networks run by universities - Bonjour creates a flood of MDNS traffic, causing headaches for network managers.
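For illustration, this is roughly what a Bonjour-style lookup looks like from the client side, sketched in Python with the third-party zeroconf package (an open-source implementation of Multicast DNS and DNS-based service discovery). The _ipp._tcp printer service type is a common example, and the browse never reaches beyond the local subnet, which is exactly the limitation the article describes:

```python
# Browse the local link for printers advertised over Multicast DNS.
# Requires the third-party "zeroconf" package (pip install zeroconf).
import time
from zeroconf import ServiceBrowser, Zeroconf

class PrinterListener:
    def add_service(self, zc, type_, name):
        info = zc.get_service_info(type_, name)
        if info:
            print(f"Found printer: {name} at {info.server}:{info.port}")

    def remove_service(self, zc, type_, name):
        print(f"Printer disappeared: {name}")

    def update_service(self, zc, type_, name):
        pass  # record updates are ignored in this sketch

zc = Zeroconf()
browser = ServiceBrowser(zc, "_ipp._tcp.local.", PrinterListener())
try:
    time.sleep(10)   # mDNS queries never leave the local subnet
finally:
    zc.close()
```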
"We targeted Bonjour at home networks, but over the last 10 years Multicast DNS - what Apple calls Bonjour - has become very popular," said Stuart Cheshire, an Apple networking engineer who created Bonjour and wrote the MDNS specifications. "Every network printer uses Bonjour. TiVo, home video recorders and cameras use it. IPads and iPhones use it, and we are starting to get a lot of demand from customers that they won't be able to print from iPads to a printer in the next building."
Cheshire admitted that Apple is responding to demands from university network managers that the company fix Bonjour and related technologies such as AirPrint for printing over Wi-Fi networks and AirPlay for streaming audio and video so they will work better over enterprise networks.
In August, the Educause Higher Ed Wireless Networking Admin Group published an open petition to Apple seeking improved support for Bonjour, AirPlay and AirPrint on large, campus networks. The petition has 750 signatures.
The petition notes that Apple represents half of all devices on university networks. It cites increasing demand among campus users for Apple TVs that use AirPlay for presentations and personal use. It also cites increasing user demand for AirPrint from devices such as iPads.
"Limitations of Apple's Apple TV, Airplay and Bonjour technologies make it very difficult to support these scenarios on our standards-based enterprise networks," the petition said.
The higher ed community has asked Apple to fix several aspects of these technologies, including: making Apple TVs accessible from Apple client devices across multiple IPv4 and IPv6 subnets; improving Bonjour so that it works in a scalable way on large enterprise wireless and wired networks; adding support for wireless encryption and authentication methods to Apple TV; and enabling the use of enterprise Authentication, Authorization and Accounting services for Apple devices, including Apple TV.
In general, university network managers want Bonjour, AirPlay and AirPrint to be scalable to thousands of devices, to work with wired and wireless networks from different vendors, to not negatively impact network traffic, to be easily manageable on an enterprise scale and to be provided at a reasonable cost.
In response to some of these concerns, Cheshire proposed to the IETF that MDNS be changed to allow for small multicast domains to be created on a large network, without losing the zero configuration and service discovery features.
Cheshire pointed out that several vendors - Xirrus, Aruba, Cisco, Aerohive and Ruckus - are selling Bonjour proxy devices to help enterprise customers by relaying multicast traffic across large networks, but that these devices are making the multicast flooding problem worse.
"The software that already exists in Apple Bonjour and Linux Avahi has some wide-area capabilities. We have some tools to build with, but we have not put it together right,'' Cheshire said. "The question is whether there is interest in the IETF to step in and do it better"
Representatives of Xirrus, Cisco and CheckPoint said they were interested in seeing this work go forward at the IETF.
"We would much rather put our development efforts into a standard protocol," said Aaron Smith, director of software, applications and services at Xirrus. "We are really heavy into the education market; nearly half of our engagements are in K-12 or higher ed. We're very interested in this kind of approach, especially if Multicast DNS would work better on Wi-Fi."
"I fully support this work," said Check Point Fellow Bob Hinden. "It's a real problem today. It's going to be worse with multiple subnets in the home."
Kerry Lynn from the IEEE outlined the requirements for a new standard that would fix MDNS.
"We need to build something that's scalable, usable and deployable," Lynn said. "It needs to enable DNS-based service discovery across lots of links. It needs to work with both local and global use. And it needs to be scalable in terms of network traffic."
Thomas Narten, who works on Internet Technology and Strategy at IBM, led the discussion about creating an MDNSext working group. Narten said he expects the IETF to make progress on creating a standard fix to the Bonjour problem between now and when the IETF meets again in Orlando in March.
"There's a recognition of the problem and a willingness to work on it," Narten said. "We have to figure out how best to get to a solution. The universities are hurting; they're seeing this problem for real."
Read more about LAN and WAN in Network World's LAN & WAN section.