Tuesday, October 9, 2012

Facebook proposes revised settlement in Sponsored Stories lawsuit


Facebook has proposed a revised settlement in a lawsuit in which it was alleged to have used the names and likeness of the plaintiffs without their prior consent in "Sponsored Stories" advertisements shown to their online friends on the social networking website.
In a court filing on Saturday, the company agreed to pay U.S. Facebook users who appeared in Sponsored Stories up to $10 each from a $20 million settlement fund, amending its earlier proposed settlement, which would have paid $10 million to activist organizations and charities as a cy pres award because direct payment to all members of the class was deemed infeasible. About another $10 million was earmarked for attorneys' fees and expenses.
Facebook has also promised greater user control including a tool that will enable users to view going forward the subset of their interactions and other content on Facebook that may have been displayed in Sponsored Stories, and the ability to prevent further displays of these Sponsored Stories.
The motion for preliminary approval of the earlier proposed settlement was rejected in August by the U.S. District Court for the Northern District of California, San Francisco division, with Judge Richard Seeborg ruling that there were sufficient questions regarding the proposed settlement. The question remains whether $10 million in cy pres recovery is fair, adequate, and reasonable, the judge wrote in his order.
"Although it is not a precise science, plaintiffs must show that the cy pres payment represents a reasonable settlement of past damages claims, and that it was not merely plucked from thin air, or wholly inconsequential to them, given their focus on prospective injunctive relief," Judge Seeborg added.
The settlement relates to a class action lawsuit filed in U.S. District Court in California in 2011 by Angel Fraley and others, who alleged that Sponsored Stories constitute "a new form of advertising which drafted millions of (Facebook members) as unpaid and unknowing spokespersons for various products," for which they were entitled to compensation under California law.
The new proposed settlement, which also needs Seeborg's approval, covers a class of nearly 125 million users; if all of them filed claims, each member could receive only a few cents on a pro-rata basis.
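For a rough sense of the math, assume (hypothetically) that about half of the $20 million fund goes to fees and expenses and that every one of the roughly 125 million class members files a claim; the pro-rata payout then works out to pennies per person:

```python
# Rough, hypothetical illustration of the pro-rata payout math described above.
# The fund size and class size come from the article; the fee deduction is an assumption.

settlement_fund = 20_000_000      # total settlement fund in dollars
assumed_fees = 10_000_000         # assumed attorneys' fees and expenses (illustrative only)
class_size = 125_000_000          # approximate number of class members

net_fund = settlement_fund - assumed_fees
per_member = net_fund / class_size
print(f"Pro-rata payout if all {class_size:,} members file: ${per_member:.2f} each")
# -> roughly $0.08, i.e. "a few cents" per member
```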
Some funds will still go to charity if any remain after users' claims, attorneys' fees and other expenses are paid. The entire amount could still go to charity if it is found economically unfeasible to pay all class members without exceeding the settlement fund, according to the revised proposal. Facebook can also now oppose petitions for fees and expenses by plaintiffs' counsel.
Facebook has argued in the lawsuit that the users had agreed to its terms of use as a condition for using its website, and agreed to the possible use of their name and profile picture in association with commercial, sponsored or related content, before Sponsored Stories was launched. This "clear, express consent posed an insurmountable hurdle for Class Members," who had the burden to prove that the social networking site did not have consent to display their names and profile pictures, the company said in the filing on Saturday.
John Ribeiro covers outsourcing and general technology breaking news from India for The IDG News Service. Follow John on Twitter at @Johnribeiro. John's e-mail address is john_ribeiro@idg.com

iPhone 5 vs. the HTC One X+


The iPhone 5, for better or for worse, is inevitably the one that all new smartphone releases will be compared to, at least for the next few months. Despite slightly underwhelming initial reviews, it's an inarguably impressive device that matches, and in some areas exceeds, its Android-powered rivals.
The latest major device announced in that ecosystem, the HTC One X+, comes from an OEM confronted with a similar need to make up ground on competitors like the Samsung Galaxy S III -- and it's a similarly impressive device. Here's a look at how it stacks up against Apple's latest offering.
INTERNAL HARDWARE
PC Magazine's well-publicized benchmarking results recently prompted the iPhone 5 to be dubbed the fastest smartphone in the world, blowing away even the Galaxy S III and Droid Razr M in that publication's testing.
While benchmarks should generally be taken with a grain of salt, the size of the gap between the iPhone 5 and its rivals means that it's unlikely to be a simple fluke -- and its powerful PowerVR SGX 543MP3 graphics hardware likely has a lot to do with this discrepancy.
Although the 1.7GHz quad-core CPU should give the One X+ an advantage over the iPhone 5's 1.3GHz dual-core, the HTC device's ULP GeForce graphics processor is likely no match for the iPhone's three-core GPU.
ADVANTAGE: iPhone 5
DISPLAY HARDWARE
The iPhone 5's slightly smaller 4-inch, 1136x640 display has better pixel density than the One X+'s larger 4.7-inch, 1280x720 screen, though HTC's much-vaunted Super LCD2 technology could compensate for that disadvantage. (Or, hey, you might just want a physically bigger screen anyway.)
While this isn't to say that the iPhone 5's own Retina display isn't very good as well, I have to give the nod to the One X+ here, on the assumption that production models have a screen at least as good as that of the One X predecessor.
ADVANTAGE: One X+
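(For the record, pixel density is just the diagonal pixel count divided by the diagonal screen size, so the numbers above are easy to check; here's a quick sketch of the calculation:)

```python
import math

def ppi(width_px, height_px, diagonal_inches):
    """Pixels per inch: diagonal resolution divided by diagonal screen size."""
    return math.hypot(width_px, height_px) / diagonal_inches

print(f"iPhone 5: {ppi(1136, 640, 4.0):.0f} ppi")   # ~326 ppi
print(f"One X+:   {ppi(1280, 720, 4.7):.0f} ppi")   # ~312 ppi
```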
CAMERA
There's almost nothing separating the two devices in terms of their available camera options -- both pack an impressive 8-megapixel rear-facing shooter, with 1080p video recording capability and a host of bells and whistles, as well as front-facing 720p-capable options. (The One X+ shades the megapixel count at 1.6 to 1.2, but that's unlikely to make much of a practical difference.)
ADVANTAGE: Push
OPERATING SYSTEM
Although this can be considered a matter of preference, I've been consistently impressed with Android 4.1 Jelly Bean since its release, and that's what the One X+ carries. However, its version will use HTC's Sense overlay, which could dilute that Jelly Bean goodness. (It should be noted, though, that Sense is the least maligned of the OEM Android skins. Ever heard people talk about how much they hate Motoblur? It's a little intense.)
That said, iOS 6 is probably not the disaster it has been made out to be -- yes, Apple has had major problems and gotten a lot of egg on its face over the Maps fiasco. But there's plenty that iOS 6 does right, particularly if you're into social media in a big way.
In the final estimation, however, it's tough to argue with Jelly Bean as the winner here. Unless the Sense experience on Android 4.1 is startlingly and uncharacteristically terrible, this one goes to the One X+.
ADVANTAGE: One X+
BATTERY
On paper, it seems like HTC's latest should win this easily -- on a strict count of milliamp hours, the One X+ leads the iPhone 5 by 2100 to 1400.
However, the Apple device's test results appear to give it a battery life well beyond a simple measurement of mAh, and there are no public benchmarks for the One X+'s battery as yet. Because of this ...
ADVANTAGE: Push
CARRIERS
The One X+ is an impressive-looking device, arguably even more so than the iPhone 5, but it has a major drawback -- it's apparently going to be available only on AT&T, meaning Sprint subscribers like myself are out of luck. The iPhone 5 -- formerly an AT&T exclusive itself -- is now available on AT&T, Sprint and Verizon, putting it within reach of a much larger part of the U.S. populace.
ADVANTAGE: iPhone 5
FINAL RESULT: Push
I know, I know -- not even soccer fans like a draw. But based on available information, there's no real way to say definitively that either the HTC One X+ or the iPhone 5 is a demonstrably "better" product. As ever, it boils down to what you, the user, need your smartphone to do -- if you're after bleeding-edge graphics performance or you don't want to use AT&T, the iPhone 5 is probably the device for you. If you want a big, impressive screen or the excellent Android 4.1, go with the One X+.
Email Jon Gold at jgold@nww.com and follow him on Twitter at @NWWJonGold.

Good news for job hunters in Android, Linux, and open source


Job candidates with tech skills in general and Linux skills in particular tend to face better-than-average prospects in today's otherwise gloomy hiring marketplace, but in the past few days the outlook appears to have gotten even brighter.
Following hard on the heels of news from a few weeks ago that DevOps is a growing trend, three separate reports in recent days suggest a particularly rosy future for those with skills in Android, Linux, and open source software.
"Top Five Tech Jobs Point to Opportunity for Linux Pro's" was the Friday headline at Linux.com, for example, in which author Jennifer Cloer reports on staffing and consulting firm Robert Half International's new Salary Guide listing the top five most lucrative tech jobs for 2013.
'A fundamental understanding of Linux'
Mobile app developers, wireless network engineers, network engineers, data modelers, and portal administrators are the most promising jobs for salaries next year, the publication predicts, and "the most important thing they have in common is that they each require a fundamental understanding of Linux," Cloer notes.
Friday also brought the headline "Open source hobbyists now in high demand" over at ITworld, in which author Brian Proffitt mulls the open source operating system Contiki.
Then, on Monday, came a report from IT careers site Dice naming several key skills being sought at all-time high levels.
'It's time to take advantage'
Software development and quality assurance lead the list, but among those next in line are Python, Ruby on Rails, Android, and JBoss.
"The need for open source programming language skills that power a huge number of Web applications and technologies is evident in requests for Python and Ruby," Dice explains. "Both have hit all-time highs in six of the 10 months in 2012."
As for Linux-based Android, job postings seeking skills in that area are up a full 33 percent over last year, Dice reports.
In short, "technology professionals with these skills and expertise are being sought like never before on Dice," the company concludes. "It's time to take advantage."

Wednesday, October 3, 2012

IBM drops Power7+ in high-end Unix servers


IBM has started to roll out a new processor for its Power family of servers, a staggered affair that will start with higher-end systems and eventually reach the midrange and low-end boxes.
The new Power7+ chip has a higher clock speed than its predecessor, at up to 4.4GHz, but the biggest change is in the on-chip Level 3 cache, which IBM has expanded to a sizable 80MB, from 32MB on the Power7.
The bigger cache means more of the data being used for calculations -- the "working set" -- can be stored on the chip close to the CPU cores, which helps to speed operations. With a smaller cache, data has to be fetched more frequently from main memory.
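A minimal sketch of why that matters, using the standard average-memory-access-time formula (the latencies and hit rates below are illustrative assumptions, not IBM figures):

```python
def amat(hit_time_ns, miss_rate, miss_penalty_ns):
    """Average memory access time: hit time plus miss rate times miss penalty."""
    return hit_time_ns + miss_rate * miss_penalty_ns

# Illustrative numbers only: a larger L3 lets more of the working set stay on chip,
# cutting the fraction of accesses that fall all the way through to main memory.
smaller_l3 = amat(hit_time_ns=10, miss_rate=0.20, miss_penalty_ns=100)   # e.g. a 32MB-class L3
larger_l3 = amat(hit_time_ns=10, miss_rate=0.05, miss_penalty_ns=100)    # e.g. an 80MB-class L3
print(f"Smaller L3: {smaller_l3:.0f} ns per access")
print(f"Larger L3:  {larger_l3:.0f} ns per access")
```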
The higher clock speed and larger cache will give a boost in performance for databases and Java applications, according to Satya Sharma, CTO for IBM's Power Systems business and an IBM "fellow," or one of its top engineers. "We can improve performance for some Java applications by up to 40 percent compared to the Power7," Sharma said.
IBM's top brass are due to discuss its systems business during a customer webcast at 11 am Eastern Wednesday. They may also talk about a new, high-end storage system called the DS8870 and an update to IBM's DB2 Analytics Accelerator, which are also being announced.
Across the country at about the same time, Oracle systems chief John Fowler is due to give a keynote speech at Oracle OpenWorld in San Francisco, and Hewlett-Packard is holding its financial analyst day, where it's sure to give an update on its systems strategy.
None of the vendors have a lot to cheer about, at least when it comes to Unix: Unix systems revenue dropped 20 percent in the June quarter, to $2.3 billion, according to recent figures from IDC. But at least IBM's revenue declined only 10 percent, and it managed to gain 6.1 points of market share, IDC said.
The Power7+ is being offered now for the Power 770 and 780 systems, which sit near the high-end of the Power line-up. It will come eventually to lower-end systems as well, like the 740 and 750, but IBM isn't saying yet when that will be.
IBM's most powerful Unix machine, the Power 795, won't get the new chip at all, Sharma said. Customers that buy such high-end systems generally prefer stability to incremental upgrades, he said, adding that IBM took the same tack when it introduced the Power6+.
The 795 will get at least one technical boost, however. IBM is introducing a new memory module with twice the density, so the maximum configuration for the 795 increases from 8TB of main memory to 16TB. The 770 and 780 also get the denser DIMMs.
IBM is also introducing a new memory compression accelerator that can "make a 32GB system look like a 48GB or 64GB system," Sharma said. That can help reduce memory costs for customers, but there's a trade-off in increased latency as data is decompressed for use.
With the Power7+, IBM has also doubled the number of virtual machines customers can run on each processor core, to 10 VMs. While customers might not want that many virtual machines for production use, developers can use them for jobs like compiling code, Sharma said.
Another feature that's been discussed with the Power7+ -- the ability to put two processors in one socket -- also isn't available yet. The DCM, or Dual Chip Module, effectively increases the operations per second that customers get from each socket, with the trade-off that the cores run at lower clock speeds.
It's not being offered for the high-end Power systems machines, however. "It's partly that there is a little bit more work to do, and partly it's the class of systems where we want to use that capability," Sharma said. He wouldn't give details but implied the technology is destined for midrange or lower-end systems like the Power 740 and 750.
IBM offers a sort of "compute on demand" scheme for its Power systems, through which customers can pay to activate additional processor cores for a few weeks or months, such as during the holiday shopping season, then disable them again afterwards.
It's running a "special offer" for customers who buy a new 780 or 795 system. For each processor that ships with the server, they get 15 days of additional processing for free. So if a customer buys a Power 780 system with 16 cores enabled, they also get 15 days when they can turn on an additional 16 cores.
It's also introducing a concept called Power System Pools, which lets customers use those 15 days -- and any other compute days they purchase -- across up to 10 different 780 or 795 servers. Essentially it gives customers more flexibility in how they allocate their processor resources, and, IBM hopes, makes them a bit more likely to choose IBM.
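As a rough, hypothetical illustration of how those credits add up (the server mix below is invented, not an IBM example), each enabled core contributes 15 free core-days that the pool can draw on:

```python
# Hypothetical example of the elastic capacity credits described above:
# 15 free days of additional processing per enabled core, pooled across servers.
servers = {"Power 780 #1": 16, "Power 780 #2": 32}   # cores enabled per server (made-up mix)

free_core_days = sum(cores * 15 for cores in servers.values())
print(f"Free elastic capacity across the pool: {free_core_days} core-days")
# With Power System Pools, those core-days can be spent across up to 10 eligible
# 780/795 servers rather than being tied to the machine they shipped with.
```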
IBM doesn't usually publish prices for such high-end systems but the faster chips will come in at about the same price point as their predecessors, Sharma said. It will offer slightly better pricing on the Power 780 because it would prefer more customers bought its higher-end systems, he said.
James Niccolai covers data centers and general technology news for IDG News Service. Follow James on Twitter at @jniccolai. James's e-mail address is james_niccolai@idg.com

Web pages load 9% faster over LTE on Galaxy S III than iPhone 5


The Samsung Galaxy S III loads Web pages 9% faster over LTE wireless than Apple's new iPhone 5, according to tests by Strangeloop Networks, a vendor of network optimization software.
The report was based on tests in July and September of six different devices. The tests measured page load times from 200 e-commerce sites.
Strangeloop also found the iPad 2 loaded Web pages 22% faster than the Samsung Galaxy tablet over a 3G wireless network.
In a separate comparison, Strangeloop found that the average home page took 11.8 seconds to load on a Galaxy S smartphone and 11.5 seconds on an iPhone 4 over 3G, making both 40% slower than the average desktop load time.
When LTE was compared to 3G, Strangeloop found that LTE was 27% faster. While most carriers claim that LTE is 10 times faster than 3G, Strangeloop's tests found the difference to be far less.
The average LTE speed for loading a Web page was 8.5 seconds compared with 11.7 seconds for 3G.
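Those averages are where the 27% figure comes from; a quick check of the arithmetic:

```python
# Percentage improvement of LTE over 3G, using the averages reported above.
lte_seconds = 8.5
threeg_seconds = 11.7

improvement = (threeg_seconds - lte_seconds) / threeg_seconds * 100
print(f"LTE page loads were {improvement:.0f}% faster than 3G")   # ~27%
```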
"Although LTE networks have improved mobile performance, pages are still far too slow," Strangeloop CEO Jonathan Bixby said.
Most mobile shoppers want a page to load in four seconds or less, according to surveys, he said.
Strangeloop didn't elaborate on the test methodology.
Matt Hamblen covers mobile and wireless, smartphones and other handhelds, and wireless networking for Computerworld. Follow Matt on Twitter at @matthamblen or subscribe to Matt's RSS feed. His email address is mhamblen@computerworld.com.
Read more about smartphones in Computerworld's Smartphones Topic Center.

Ellison hawks Oracle's cloud stack, calls out Salesforce.com


Oracle CEO Larry Ellison made a vigorous sales pitch on Tuesday for his company's next-generation Fusion Applications and underlying technology platform, saying they constitute a more modern approach to cloud-based software than offerings from rivals like Salesforce.com.
While Ellison didn't provide much news during his talk at OpenWorld in San Francisco, his remarks served to crystallize Oracle's market message for SaaS (software as a service), PaaS (platform as a service) and IaaS (infrastructure as a service).
Ellison began with a boastful claim about Oracle's place in the industry.
"Oracle has more SaaS applications than any other vendor," covering sales, human resources, ERP (enterprise resource planning) and more, he said. "Everything you need, top to bottom, to run your enterprise in the cloud."
He noted that "every time you acquire a SaaS application you also acquire the underlying technology."
He went on to describe Oracle's platform, which includes Oracle's database, application server, OS, servers and storage. "It all comes together. It all has to be there for the cloud to work."
Ellison gave an update on the progress of Fusion Applications, which became generally available last year. Some 400 customers are licensing the software, with about 100 having gone live.
Out of those, about 40 percent are using Fusion CRM (customer relationship management) and another roughly 40 percent are running HCM (human capital management), according to a slide shown during Ellison's presentation. The rest are running Fusion ERP modules.
Two-thirds overall are deploying Fusion in Oracle's cloud, Ellison added.
Many customers who now prefer to run applications either on-premises or via dedicated hosting are going to adopt Oracle's Private Cloud, another offering announced this week that essentially duplicates Oracle's public cloud behind a customer's firewall.
But in any event, "you can make one decision and change your mind" since Fusion Applications can be moved between deployment models without changes, Ellison said.
"We're the only one who gives you a choice of deployment," Ellison said, noting that Salesforce.com customers don't have the option to move its cloud applications into their own data center.
Customers could do initial testing on the Oracle public cloud and then do a production deployment on a private cloud, he added.
While some may see Oracle's full-stack approach to the cloud as the ultimate vendor lock-in, Ellison said all of Oracle's technology is based on open standards such as Java and Linux.
"Standards are still important," he said. "Just because we're in the cloud we don't forget everything we've learned over the past 20 years of computing."
He also sought to dispel any notion that cloud services amount to magic for customers.
"Just because the application is in the cloud doesn't mean you don't have to do any work," he said. "You're still going to have to interconnect these applications. Therefore we provide a platform to do it."
While Fusion Applications' SOA (service-oriented architecture) makes it easier for customers to make these connections, there's more to it than flipping a switch, he said. "Otherwise Deloitte wouldn't have a big cloud practice. Accenture wouldn't have a big cloud practice. They must be doing something."
Ellison repeated statements he made earlier in the week regarding a new multitenancy feature in Oracle's upcoming 12c database. The feature will allow a number of "pluggable" databases to reside in a container.
This approach is superior to the form of multitenancy used by most SaaS vendors, according to Ellison.
"We think you should not commingle two customers' data in the same database," he said. "You can still share hardware, have shared resources and operate efficiently. We just don't think you should write an application with multitenancy at the application layer."
NetSuite and Salesforce.com pioneered cloud software, having been formed in the late 1990s, but "that was a while ago," Ellison said. "They built multitenancy into the application layer because they had no choice."
Fusion, in contrast, is "extremely modern," Ellison said.
Ellison wrapped up his speech with a discussion of Oracle's Social Relationship Management product family.
"There's a lot more data out there than there used to be, and all that data properly processed will give you business insights about your customers, business insights about your products," he said.
Ellison described how Oracle's social technologies could detect customer discontent or confusion on a social network and then help companies quickly respond to specific customers. To improve its hand in this area, Oracle has made a number of acquisitions, including Collective Intellect and Involver.
In a demonstration, Ellison showed how a marketing manager for Lexus could use a group of Oracle technologies to plow through nearly 5 billion Twitter messages and determine which Olympic athlete would be the best person to represent the automaker in a campaign, based on audience interest.
OpenWorld continues through Thursday in San Francisco.
Chris Kanaracus covers enterprise software and general technology breaking news for The IDG News Service. Chris' email address is Chris_Kanaracus@idg.com

IETF starts work on next-generation HTTP standard


With an eye towards updating the World Wide Web to better accommodate complex and bandwidth-hungry applications, the Internet Engineering Task Force has started work on the next generation of HTTP (Hypertext Transfer Protocol), the underlying protocol for the Web.
"It's official: We're working on HTTP/2.0," wrote IETF Hypertext Transfer Protocol working group chair Mark Nottingham, in a Twitter message late Tuesday.
The group will use the SPDY protocol as the basis for the updated standard. Engineers at Google developed SPDY as a way to hasten the delivery of Web content over the Internet.
Nottingham officially announced the work following the recharter of the Hypertext Transfer Protocol working group by the IESG (Internet Engineering Steering Group).
Version 2.0 of HTTP will address the changing nature of how people use the Web. While the first generation of Web sites were largely simple and relatively small, static documents, the Web today is used as a platform for delivering applications and bandwidth-intensive real-time multimedia content.
The protocol will reduce latency and streamline the process of how servers transmit content to browsers. It must be backward compatible with HTTP 1.1 and remain open to extension for future uses.
HTTP 2.0 will continue to rely primarily on TCP (Transmission Control Protocol), though other transport mechanisms may be substituted.
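For context, HTTP/1.1 is a text-based request/response protocol carried over TCP; a minimal sketch using Python's standard library (the host below is just a placeholder) shows the kind of per-request exchange that HTTP/2.0, borrowing from SPDY, aims to streamline:

```python
import http.client

# A plain HTTP/1.1 request over TCP, the style of exchange HTTP/2.0 aims to speed
# up (for example, by multiplexing several requests over a single connection
# instead of paying a round trip per resource).
conn = http.client.HTTPConnection("www.example.com", 80, timeout=10)
conn.request("GET", "/", headers={"Host": "www.example.com"})
resp = conn.getresponse()
print(resp.status, resp.reason)        # e.g. "200 OK"
print(resp.getheader("Content-Type"))
conn.close()
```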
Julian Reschke, Alexey Melnikov and Martin Thomson will serve as editors for the proposed draft, to be called draft-ietf-httpbis-http2-00. The group is scheduled to submit a proposed standard to the IESG by 2014.
The working group will also continue to refine the current version of the protocol, HTTP 1.1, which underlies the entire World Wide Web. According to estimates, there are currently about 8.45 billion Web pages.
Joab Jackson covers enterprise software and general technology breaking news for The IDG News Service. Follow Joab on Twitter at @Joab_Jackson. Joab's e-mail address is Joab_Jackson@idg.com

Sandia builds massive Android network to study security, more


Government scientists have built a network of hundreds of thousands of simulated Android mobile devices that could be used for building better security on the most popular mobile devices.
By early spring 2013, the Sandia National Laboratories in California plans to make software tools available to private and government organizations that want to build their own environment for studying the behaviors of smartphone networks.
Sandia scientists have built a network of as many as 300,000 virtual handheld computing devices, but say the technology can scale up to run on supercomputer-class machines, or scale down to a workstation.
What the researchers have done is link together instances of generic Android, each running on a separate virtual machine. The network, which runs on racks of off-the-shelf, x86 desktops, can be built up into a realistic computing environment that includes a full domain name service (DNS), an Internet relay chat (IRC) server, a web server and multiple subnets.
A key component of MegaDroid is an imitation Global Positioning System (GPS) that includes simulated data of a smartphone user in an urban environment. Since Wi-Fi and Bluetooth capabilities depend on GPS data, the feature is important for studying how the two communication features could be used by cybercriminals to steal data.
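Sandia's tooling isn't public yet, but as a rough, hypothetical illustration, the simulated location feed for a single device can be as simple as a timed sequence of latitude/longitude points walked along a street grid:

```python
import random

# Hypothetical sketch: generate a simple simulated GPS trace for one "user"
# moving through an urban grid, the kind of synthetic location feed a
# MegaDroid-style environment supplies to emulated handsets.
# The starting coordinates and step size are arbitrary examples.
def simulated_walk(start_lat, start_lon, steps, step_deg=0.0005, seed=42):
    random.seed(seed)
    lat, lon, trace = start_lat, start_lon, []
    for second in range(steps):
        # Move along one axis at a time, like following city blocks.
        if random.random() < 0.5:
            lat += random.choice((-1, 1)) * step_deg
        else:
            lon += random.choice((-1, 1)) * step_deg
        trace.append((second, round(lat, 6), round(lon, 6)))
    return trace

for t, lat, lon in simulated_walk(37.7749, -122.4194, steps=5):
    print(f"t={t}s lat={lat} lon={lon}")
```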
Researchers also could run malware on any of the simulated devices to see how it would behave within the network.
"If you have something you're capable of running on an Android device, be it malware, an application or whatever, this platform could test it for you," Keith Vanderveen, manager of Sandia's Scalable and Secure Systems Research department, said.
Android is the favorite mobile platform of cybercriminals. Reasons include the platform's large user base and the fact that any organization can set up an app market. In August, Android accounted for almost 53% of the smartphone market, comScore said.
Beyond malware, MegaDroid has much broader uses. Because it can scale to the size of real-life cellular networks, it is expected to be valuable in finding ways to limit damage from network disruptions due to glitches in software or protocols, natural disasters or acts of terrorism.
In addition, the platform would be useful in studying methods for preventing unauthorized data from leaving a device, a major concern for corporations and the departments of Defense and Homeland Security.
MegaDroid will be released as an open-source project, so other researchers can modify the technology to fit their needs. While Android was chosen for the initial platform, the technology could be used in testing Apple's iOS devices.
"The platform is really designed to be flexible," David Fritz, a Sandia researcher, said.
MegaDroid is an offshoot of simulation platforms built for studying large-scale networks of Windows and Linux computers. Over the last three years, Sandia has spent a total of $3.5 million on the various projects.
The laboratory is open to working with academia and private industry on the MegaDroid project. In the 1990s, Sandia helped advise the President's Commission on Critical Infrastructure Protection, which led to its current focus on network security.
Read more about wireless/mobile security in CSOonline's Wireless/Mobile Security section.

Malnets lead the cyberattack pack


In politics, the future may belong to green energy and better education, but in the world of cybercrime, it looks like it increasingly belongs to malicious networks, or malnets.
That is the key finding of Blue Coat Security Lab's Mid-Year Malware Report, released Tuesday. The company said the number of malnets now stands at more than 1,500, an increase of 300% in the past six months, and it expects they will be "responsible for two-thirds of all malicious cyberattacks in 2012."
Malnets are distributed infrastructures within the Internet that are built, managed and maintained by cybercriminals for the purpose of launching persistent, extended attacks on computer users. That infrastructure generally includes several thousand unique domains, servers and websites that work together to lure users to a malware payload.
They are increasingly popular, Blue Coat said, because they are so effective. In what it calls a five-stage "vicious cycle," a malnet first drives a user to malware, through any number of means, including drive-by downloads, email from trusted sources or trusted websites.
"Then the user's computer is infected with a Trojan," the report said. "Once the computer is compromised it can be used by the botnet to lure new users into the malnet by using the infected machine to send spam to email contact lists, for example."
"A compromised system can also be used to steal the victim's personal information or money, and, in some cases, can also function as a jumping-off point for attacks on neighboring machines," the report said.
Tim Van Der Horst, malware researcher at Blue Coat Systems, said this demonstrates what the report calls the "organic ... self perpetuating" nature of malnets, which is one of the things that makes them so difficult to eradicate.
"When users are infected, they become a bot in a botnet," Van Der Horst said. "They communicate with a command-and-control server, and send results to the bad guys."
In short, all the capabilities of the compromised computer are in the criminals' hands. "If the computer can do it, the bad guy can make the computer do it," Van Der Horst said. "It can steal online banking credentials or leverage the machine to launch new attacks, like sending email as you to your contacts, so they're getting it from a trusted source."
Malnets are also geographically dispersed, which means that even if they are shut down in one country, they can continue operating in others, and launch simultaneous attacks. Unlike advanced persistent threats (APTs), malnets aim "not to target one million people with a single search term but instead target one million people with one million different search terms," the report said.
Malnets target users at what Blue Coat calls the "watering holes" of the Internet -- more than a third of requests for web content go to search engines, but social networking and audio/video clips are also popular categories.
"According to the Cisco Visual Networking Index, by 2016 all types of video will account for 86% of global consumer traffic," the report said. "With the growth of video traffic, tried and true socially engineered attacks like fake video codecs have an opportunity to dupe users into downloading malware."
They also can change host names frequently. Shnakule, the largest malnet in the world, changed the host names of its command-and-control servers more than 56,000 times in the first nine months of the year.
In the face of such attacks, traditional, signature-based defenses are not enough, Blue Coat said, noting that one of the ways enterprises should protect themselves is with better education of their employees.
Among the ways to avoid poisoned search engine results are to stay away from any that appear to be hosted in other countries, such as .IN, .RU or .TK, unless the search is related to that country; to avoid results with teaser text that reads as if it was constructed by a machine; and, if a result looks suspicious, to click on one of the many other results that were returned, the report said.
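As a simple illustration of the first of those tips (the domain watch list and sample results below are made up), a filter that flags results hosted under unrelated country-code domains might look like this:

```python
from urllib.parse import urlparse

# Hypothetical sketch of the "suspicious top-level domain" tip above.
# The TLD watch list and example results are illustrative, not a real blocklist.
SUSPECT_TLDS = {"in", "ru", "tk"}

def flag_suspicious(results, expected_country=None):
    flagged = []
    for url in results:
        host = urlparse(url).hostname or ""
        tld = host.rsplit(".", 1)[-1].lower()
        if tld in SUSPECT_TLDS and tld != (expected_country or "").lower():
            flagged.append(url)
    return flagged

results = ["http://example.com/review",
           "http://cheap-codecs.tk/download",
           "http://news.example.ru/story"]
print(flag_suspicious(results))   # the .tk and .ru results get flagged
```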
Another simple but too-frequently ignored security practice is to apply patches and other security updates as soon as they are issued. "The availability of a patch doesn't mean that users have applied it," the report said. "The Conficker/Downandup botnet has been alive for nearly four years now, with infected systems still receiving instructions."
Van Der Horst said the most effective way to defend against malnets is not to wait for a new threat to emerge and then block it, but to identify the malnet infrastructure delivering the attacks and block them at the source. This aims to prevent new attacks before they are launched -- what the company calls Negative Day Defense.
It doesn't matter what the specific threat is, since the defense is aimed at blocking the threat delivery mechanism, he said.
Read more about malware/cybercrime in CSOonline's Malware/Cybercrime section.

Riverbed reshapes WAN optimization strategy with Juniper partnership, products


Riverbed announced the availability of new model Steelhead WAN optimization appliances and native support for VMware's ESX hypervisor on Wednesday. It also said it was partnering with Juniper to offer its WAN optimization customers an upgrade path.
The new CX7055 family of WAN optimization devices brings SSD-based storage to the table for better performance -- anywhere from 1.6TB for the $130,000 L model to 5.8TB for the $235,000 H device. The 2U units can manage between 622Mbps (CX7055L) and 1.5Gbps (CX7055H) of WAN capacity. The CX5055 line also boasts SSD technology, trading some capacity for a lower price point -- the CX5055M, which handles 200Mbps, costs $65,000, and the 400Mbps CX5055H goes for $100,000.
However, it's the new integration of VMware software directly into the EX line of "branch-in-a-box" devices that Riverbed is particularly eager to show off.
Miles Kelly, Steelhead senior director of product marketing, says that the new "virtual edge" of the data center includes virtual infrastructure in the branch. There are three key advantages to a more centralized model of provisioning: consolidation, scalability and what he refers to as "agile" service delivery. Adding VMware's ESX hypervisor accomplishes that task.
"Bringing a virtualization layer to the branch office box allows organizations to virtualize otherwise dedicated servers in the branch, so you can take DNS, DHCP, your print functions -- these are virtualized on that Steelhead EX box," he says.
The ability to run a large proportion of branch services virtually from the data center using the WAN is a powerful simplification, and one that should prove highly attractive to growing businesses, Kelly says.
Meanwhile the new products come in the wake of a recent announcement that Juniper Networks will shift its emphasis away from its own WAN optimization solutions, partnering instead with Riverbed. Accordingly, the latter company said on Monday that it would offer a trade-in program to users of Juniper's WX and WXC series acceleration appliances, providing them with Steelhead devices instead.
Riverbed also announced new versions of its Steelhead Mobile application -- which is used to manage endpoints like laptops -- and RiOS, the software running on its WAN optimization device, adding security and quality of service improvements.
Email Jon Gold at jgold@nww.com and follow him on Twitter at @NWWJonGold.
Read more about lan and wan in Network World's LAN & WAN section.

Firefox: back in the No. 2 seat once again


PCWorld's recent Web browser showdown may have crowned Chrome the ultimate winner, but new data suggests that Google's popular contender shouldn't rest on its laurels just yet.
In fact, after a similar market-share shift in August, Chrome slipped further into third place in September, lifting Mozilla's Firefox firmly back into the second-place spot it occupied until relatively recently.
In August, Chrome claimed 19.13 percent of the desktop browser market, according to market researcher Net Applications, while Firefox accounted for 20.05 percent. Still in first place was Microsoft's Internet Explorer, with 53.60 percent.
Firefox's four-year low of 19.7 percent occurred in May 2012.
Now, for September, Firefox has increased to 20.08 percent, while Chrome has dipped to 18.86 percent. Internet Explorer, meanwhile, gained a bit, reaching 53.63 percent.
'Critical Vulnerabilities for Months'
Of course, there's no denying that browser market share data varies tremendously with the firm that collects it--among many other factors.
Coincidentally, however, a recent report from security researcher Brian Krebs suggests that users should be wary of Internet Explorer, in particular.
"In a Zero-Day World, It's Active Attacks that Matter" is the title of Krebs' recent blog post, and he concludes that, "unlike Google Chrome and Mozilla Firefox users, IE users were exposed to active attacks against unpatched, critical vulnerabilities for months at a time over the past year and a half."
In fact, "if we count just the critical zero-days, there were at least 89 non-overlapping days (about three months) between the beginning of 2011 and Sept. 2012 in which IE zero-day vulnerabilities were actively being exploited," Krebs wrote--and "that number is almost certainly conservative."
For that same time period, however, Krebs couldn't find any evidence that malicious hackers had exploited publicly disclosed vulnerabilities in Chrome or Firefox before those flaws were fixed, he added.
'A Very Sane Approach'
Krebs' analysis comes in the wake of a recent zero-day vulnerability affecting IE.
"Microsoft was relatively quick to issue a fix for its most recent IE zero-day (although there is evidence that the company knew about the vulnerability long before its first public advisory on it Sept. 17)," but "the company's 42-day delay in patching CVE-2012-1889 earlier this summer was enough for code used to exploit the flaw to be folded into the Blackhole exploit kit, by far one of the most widely used attack kits today," Krebs wrote.
His conclusion?
While browser choice can be an emotional topic, at least "temporarily switching browsers to avoid real zero-days is a very sane and worthwhile approach to staying secure online," he wrote. "Although it is true that all software has vulnerabilities, the flaws we should truly be motivated to act on are those that are actively being exploited."