Dell Notebooks Come With Integrated Mobile Broadband

Rodney Gedda

Dell has announced the addition of BigPond, using Telstra's Next G mobile network, as an option for integrated mobile broadband in a number of its notebooks.

From today, 13 notebook models will offer built-in access to BigPond wireless broadband, with plans starting at $34.95 per month on a 12-month contract.

Dell previously only offered built-in support for Vodafone mobile broadband.

Dell client computing strategist Jeff Morris said the move is in response to direct customer feedback and the company has worked with BigPond to make the wireless broadband experience "simple, easy to buy and to use".

BigPond group managing director Justin Milne said customers simply need to "fire up your new notebook, run the connection manager, pick a plan, and you're online".

Notebooks that support mobile broadband are found throughout Dell's Latitude, Precision, Inspiron, Vostro and XPS ranges.


RecoverGuard Provides Error-Proof Disaster Recovery Plan

Mario Apicella


In IT, change is the only constant, as hardware and software are updated almost continuously. Companies that take business continuity seriously protect themselves by creating a recovery site to run vital business processes during an emergency.

Needless to say, keeping the recovery site current is essential to business continuity, but given the constant flux of hardware and software updates, the outcome of that effort is often uncertain.

And this uncertainty is compounded by the fact that changes to the IT infrastructure are often automated, whereas replicating those updates to the DR (disaster recovery) site remains a manual, error-prone activity.

An overlooked change could cripple your business in the event of a disaster. Think, for example, how damaging it would be if an important database were moved to a different volume to improve performance but that change was never replicated at the recovery site.

Is there a better way than zealous attention to detail to keep a DR plan effective? According to startup Continuity Software, its recently announced RecoverGuard 2.0 is the answer.

Think of RecoverGuard as a watchdog that can automatically compare the details of two IT infrastructures, then find and report their differences. Not only does RecoverGuard continuously monitor the two sites, but it also automatically creates a problem ticket when discrepancies arise.
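To make the idea concrete, here is a minimal, purely illustrative sketch of that compare-and-ticket loop in Python. It is not Continuity Software's implementation; the snapshot layout and the open_ticket() helper are invented for the example.

```python
# Toy illustration of the "compare two sites, raise a ticket on any
# discrepancy" idea; the snapshot format and open_ticket() are hypothetical.

def diff_sites(primary: dict, recovery: dict) -> dict:
    """Return every setting that differs between the two site snapshots."""
    keys = primary.keys() | recovery.keys()
    return {k: (primary.get(k), recovery.get(k))
            for k in keys if primary.get(k) != recovery.get(k)}

def open_ticket(discrepancies: dict) -> None:
    """Stand-in for feeding a real ticketing system."""
    for setting, (prod, dr) in sorted(discrepancies.items()):
        print(f"TICKET: {setting} is {prod!r} in production but {dr!r} at the DR site")

primary = {"db01/volume": "/vol/fast01", "db01/version": "10.2"}
recovery = {"db01/volume": "/vol/slow07", "db01/version": "10.2"}

issues = diff_sites(primary, recovery)
if issues:
    open_ticket(issues)
```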

It's interesting to note that RecoverGuard has a bottom-up, data-comes-first discovery process that initially identifies the storage objects of a site, then seeks out the hosts that own them.

During discovery, RecoverGuard builds an accurate topology map of the datacenter that admins can use to better understand and solve problem tickets.
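As a rough illustration of what such a bottom-up, storage-first discovery pass might look like (the inventory data and field names below are invented for the example, not RecoverGuard's actual model):

```python
# Invented inventory data; the point is only the order of discovery:
# enumerate storage objects first, then attach the hosts that own them.

volumes = [
    {"id": "vol-001", "array": "array-A", "size_gb": 500},
    {"id": "vol-002", "array": "array-A", "size_gb": 250},
    {"id": "vol-003", "array": "array-B", "size_gb": 750},
]

# Hypothetical result of asking each host what it mounts.
mounts = {"vol-001": "db-server-1", "vol-002": "app-server-3"}

topology = {}
for vol in volumes:                                  # storage objects first ...
    owner = mounts.get(vol["id"], "unclaimed")       # ... then the owning host
    topology.setdefault(owner, []).append(vol["id"])

for host, vols in topology.items():
    print(f"{host}: {vols}")
```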

According to Continuity Software, RecoverGuard 2.0 brings some interesting improvements over previous versions, including a more efficient and faster discovery process, and a Dashboard that empowers nontechies to manage this critical business activity.

How intrusive is RecoverGuard? Not very, according to the vendor. In fact, it sits on a dedicated Windows machine and doesn't require you to install agents on your servers. Understandably, you'll have to provide the software with ample authentication credentials, just as you give your security guards keys to open every door in the building.

I liked just about everything I heard and saw during my briefing and demonstration with Continuity Software, including its assessment challenge -- a sort of gauntlet thrown at your current DR procedure.

It goes like this: Continuity Software volunteers to perform a risk assessment that won't cost you anything if no damaging difference is found between your primary and recovery sites.

What happens if a significant inconsistency is found? Well, then, you pay US$15,000 for the assessment, plus a yearly license fee of US$2,000 per server. Are you confident enough to take that challenge?


Solid-State Drives Coming To The Market Soon

John Brandon

For laptop owners, flash-memory drives boost battery life and performance while making notebooks lighter and more bearable for frequent business travelers. In the data center, benefits include higher reliability than their magnetic counterparts, lower cooling requirements and better performance for applications that require random access such as e-mail servers.

So far, the biggest barriers to adopting solid-state drives (SSD) in the data center have been price and capacity. Hard disk drives (HDD) are much less expensive and hold much more information. For example, a server-based HDD costs just US$1 to US$2 per gigabyte, while SSD costs from US$15 to US$90 per gigabyte, according to IDC.

Capacities are just as disparate. Samsung's SSD holds only 64GB, although the company plans to release a 128GB version next year. Meanwhile, Hitachi America makes a 1TB HDD that's energy efficient and priced at US$399 for mass deployment in servers.

Enterprise Strategy Group analyst Mark D. Peters explains that solid-state technology has been on the radar for years, but has not been a "slam-dunk" in terms of price and performance for corporate managers. That's about to change, he says, because the IOPS (input/output operations per second) benefits of SSDs are too impressive to ignore. Among the advantages: an SSD has no moving parts, lasts longer, runs faster and is more energy efficient than an HDD.

And prices are falling fast. Right now, the industry trend is a 40% to 50% drop in SSD pricing per year, according to Samsung.
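Combined with IDC's US$15 to US$90 per-gigabyte figure, a quick back-of-the-envelope projection shows what that decline rate implies; this is purely illustrative compounding, not a forecast.

```python
# Compound IDC's US$15-US$90/GB range forward at the 40%-50% annual decline
# Samsung cites; illustrative arithmetic only, not a price forecast.

for annual_drop in (0.40, 0.50):
    low, high = 15.0, 90.0
    for year in range(1, 4):
        low *= 1 - annual_drop
        high *= 1 - annual_drop
        print(f"{annual_drop:.0%} annual drop, year {year}: "
              f"US${low:.2f}-US${high:.2f} per GB")
```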

The arrival of hybrid drives such as Samsung's ReadyDrives -- which use both SSD and HDD technology -- and SSD-only servers "suggests the time for SSD as a genuine -- and growing -- viable option is getting closer," says Peters. He was referring to the recent IBM announcement about BladeCenter servers that use an SSD.

"Price erosion, coupled with increased capacity points, will make SSDs an increasingly attractive alternative to HDDs" in data centers, agrees Jeff Janukowicz, an analyst at IDC.

Two examples of how SSDs solve persistent throughput problems for high-performance computing show how SSD technology may make new inroads in corporations in 2008, some industry watchers believe.

Solid-state at the Stanford Linear Accelerator Center

At this research center, SSD is being used for some of the most data-intensive work going on today. The Stanford Linear Accelerator Center (SLAC) uses particle accelerators to study questions, including where antimatter went in the early universe and what role neurexin and neuroligin proteins play in autism.

The amount of data is immense -- in the petabytes -- and the lab uses a cluster of 5,000 processor cores. Despite that, the discrete chunks of data that are requested and analyzed by several hundred researchers are highly granular -- usually just 100 to 3,000 bytes of information. At the same time, scientists tend to perform thousands of data requests, accessing a few million chunks of data per second.

Richard Mount, SLAC's director of computing, explains that the response time for these researchers' data requests is limited not by the number of processors or by the amount of network bandwidth, but rather by disk access time. "Flash memory is over a thousand times faster than disk [drive technology]," says Mount. "Hard disks are limited to around 2,000 sparse or random accesses per second. When accessing thousand-byte chunks, this means that a disk can use only 1/50th of a gigabit-per-second network link and less than 1/100,000th of a typical computer center network switch capacity."
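Mount's arithmetic is easy to check. A quick sketch using only the figures in the quote (2,000 random accesses per second at roughly 1,000 bytes each against a 1Gbit/sec. link) lands in the same ballpark as his 1/50th figure:

```python
# Rough check of the quoted figures: 2,000 random accesses/sec at ~1,000 bytes
# each versus a 1Gbit/sec network link.

accesses_per_sec = 2_000
bytes_per_access = 1_000
link_bits_per_sec = 1_000_000_000

disk_bits_per_sec = accesses_per_sec * bytes_per_access * 8   # 16 Mbit/sec
fraction = disk_bits_per_sec / link_bits_per_sec
print(f"Disk delivers {disk_bits_per_sec / 1e6:.0f} Mbit/sec, "
      f"about 1/{round(1 / fraction)} of the link")
```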

This limitation has translated into the need to make what the lab calls "skim data sets." In other words, preassembled collections of related data that at least one researcher has already requested. "There is no waiting for skim data sets that already exist, but if somebody wants one that does not already exist, then they normally have to wait for a skim production cycle that takes place once every four to six months," Mount says.

To help researchers receive data in a more ad hoc manner, flash storage may be just the thing. "We have no religious attachment to flash, but we can construct flash-based storage at a reasonable cost and around 25ms latency, and we are doing so."

SLAC has developed its own SSD-based system that is in the final debugging stages, Mount explains. "The first version of this will provide about 2TB of storage, but we can easily grow this to 5 or 10TB just by buying flash chips," though he reckons the scalability will require "more serious expenditure." At the 2TB level, it will serve as a test and development system only.

Eventually, the goal is to use SSD technology as a cache for all particle accelerator research, which will allow scientists to access data at any time from any data store. "SSDs help the entire system run more efficiently by ensuring the I/O capability is in balance with the rest of the application system," adds IDC's Janukowicz. "The characteristics of flash-based SSDs make them a well-suited alternative for high-IOPS applications that are read intensive. SSDs have no rotational latency and have high random-read performance. Thus, with SSDs the time to access the data is consistent and very small regardless of where on the device the data is held."

Considering SSD at the Pacific Northwest National Laboratory

At the Pacific Northwest National Laboratory (PNNL) in Washington, solid-state technology could help alleviate a supercomputer bottleneck. At the lab, researchers run tests that sustain a write speed of 80Gbit/sec. and a read speed of 136Gbit/sec. Yet one or two slow hard disk drives running at one-quarter the speed of the other disks cause performance to degrade quickly.

"Solid-state devices such as flash drives can use a RAID striping technique to achieve high streaming bandwidth -- just like [hard] disk drives -- while also maintaining very low latency for random access," says Robert Farber, a senior researcher at PNNL. "This is a very exciting combination."

The lab has not moved to solid-state technology yet. But Farber says the real debate is whether low-latency access for "seek-limited applications" -- in other words, many requests for small amounts of data -- can alleviate the pressure of computing bandwidth. It is not solely a price-per-gigabyte debate. "It remains to be seen how much of a price premium consumers will tolerate before robustness, power, storage capacity and physical space differences cause a mass departure from magnetic media," Farber says.

At PNNL, the I/O bandwidth goal for its last supercomputer was 25Mbit/sec. per gigaflop of peak floating-point performance, mostly so it could handle the data-intensive calculations run by the NWChem scientific software. The lab's new environmental molecular sciences facility contains a new supercomputer with a theoretical peak floating-point performance of 163 teraflops. And, as at the Stanford lab, disk speed is a critical part of the equation, so solid-state is the front-runner in solving the bottleneck.
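To put those two numbers side by side: if the 25Mbit/sec.-per-gigaflop target from the previous machine were applied unchanged to the new 163-teraflop system (an assumption made purely for illustration, not a stated PNNL goal), the implied aggregate I/O bandwidth is on the order of terabits per second.

```python
# Illustrative scaling only: apply the old 25Mbit/sec-per-gigaflop target to
# the new 163-teraflop machine to see the order of magnitude it implies.

peak_gigaflops = 163_000          # 163 teraflops
mbit_per_gigaflop = 25            # previous machine's target

total_mbit = peak_gigaflops * mbit_per_gigaflop
print(f"Implied aggregate I/O: ~{total_mbit / 1e6:.1f} Tbit/sec")
```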

One breakthrough Farber expects in the not-too-distant future: Operating systems will change their memory hierarchy to directly access SSD, turning the technology into a hard drive replacement for mass storage.

Complementary, not replacement tech for most users

One question that remains: When will SSD really impact the corporate world? Some say SSD in the data center is just on the horizon, since laptops such as the Dell XPS M1330 already use a Samsung 64GB SSD. Alienware also offers a 64GB option in some of its desktop computers. And SSD is applicable across the commercial landscape; while researchers need the speed to study proteins, retailers may need or want faster POS transactions.

One company to watch in this space: Violin Memory. The company's Terabyte-Scale Memory Appliance provides over 1Gbit/sec. access for both sequential and random access. SLAC's Mount says he tested a DRAM-based prototype appliance from Violin, and that its upcoming flash-based system "seems a good match for our applications."

A Violin spokesman explains that the two key bottlenecks in corporate computing are network speeds and IOPS for storage systems. Today, disks run at about 100Mbit/sec. for sequential operations, but only 1Mbit/sec. for random 4k blocks, he says.

"In some cases, there are minimal capacity requirements which are well suited for SSDs," Janukowicz adds. "Also, in high-performance applications, the IOPS metrics can favor SSDs over HDDs." However, even with all those benefits, he says that "IDC does not see SSDs completely replacing HDDs in servers. SSDs do offer performance advantages and are a 'green' solution. However, there are many applications that require the capacity provided by HDDs."

Enterprise Strategy Group's Peters says that throughput requirements will lead to a gradual shift away from hard disk drives to solid-state technology, but it will take time in the corporate world. "Moving wholeheartedly from one technology to another is a rare thing within data centers," he says.

John Brandon worked in IT management for 10 years before starting a full-time writing career. He can be reached at jbrandonbb@gmail.com.


MIT Finally Completes Its OpenCourseWare Project

John Cox

MIT this week announced an important digital achievement: the completion of its pioneering OpenCourseWare project. And everyone involved seems quite happy with being unsure about why exactly it's important.

The achievement is digitizing all the classroom materials for all of MIT's 1,800 academic courses, putting them online, and inviting anyone and everyone to do whatever they want with that information. It's called the OCW project, and it's spawning a global movement to make what had been jealously guarded education resources accessible to educators and learners everywhere.

You can find the outline of a course on the fundamentals of data networking, with a syllabus and lecture notes. There's a PowerPoint presentation from 2006 on "Trends in RFID Sensing".

Proposed in 2000 by a faculty committee, announced in 2001, and launched in 2002, OCW has received US$29 million in funding: US$5 million from MIT, with the rest from foundations and contributors. One key backer, the William and Flora Hewlett Foundation, has decided to invest another US$100 million over five years in various open education projects, largely because of its experience with OCW, according to Marshall Smith, director of the foundation's education program.

MIT has taken a step toward doing something more with OCW. As part of Wednesday's celebration on the MIT campus in Massachusetts, MIT President Susan Hockfield announced a new portal for OCW, one designed specifically for high school teachers and students. Dubbed "Highlights for High School," the portal's home page selectively targets MIT's introductory science, engineering, technology and math courses, with lecture notes, reading lists, exams and other classroom information. The OCW resources, including video-taped labs, simulations, assignments and other hands-on material, have been categorized to match up with the requirements of high school Advanced Placement studies.

It's that "letting them do whatever they want" part that creates the uncertainty about why OCW is important. The data on usage are impressive. In the five years since the launch of OCW, with a 50-course pilot site, an estimated 35 million individuals have logged in. About 15% are educators, 30% are students, and the rest are what MIT calls "self learners" with no identifiable education affiliation, says Steve Carson, OCW's external relations director.

The recently formed OpenCourseWare Consortium has 160 member institutions, creating and sharing their own sites, on the MIT model. Something like 5,000 non-MIT courses are now available globally, some but not all using material from the OCW Web site.

Yet, one of the most striking statistics is from a completely unexpected source: iTunes, Apple's Web site for music and videos. MIT President Hockfield said she was told in September by her daughter to check out the iTunes list of most-popular videos. To her astonishment, Hockfield found two OCW videos in the top-10 listing. "No. 3 was 'classical mechanics,'" she said. "No. 7 was 'differential equations.' Go figure."

"This expresses, to me, the hunger in this world for learning, and for good learning materials," she told her audience.

A distinguished group of speakers and panelists at the MIT event all agreed that OCW represents...well, something.

"We're unlocking a treasure trove of materials," said Steve Lerman, MIT's dean for graduate students, and chairman of the OCW Faculty Advisory Committee.

OCW's resources will factor large in plans by the government of India to create a massive expansion of educational resources, according to Sam Pitroda, chairman of the government's Knowledge Commission, which is charged with making specific recommendations on how to spend the new US$65 billion the government will invest in education over the next five years. The nation has over a half-billion people younger than 25, Pitroda says. Just one of a series of almost unimaginable goals is to increase the number of universities from 350 today to 1,500 in five years, he said.

Pitroda said the scale of such goals requires questioning basic assumptions about what education is and how it is accomplished. "We don't have enough resources to train teachers and build an entire [traditional] infrastructure to support them," he said. Hence, the commission's interest in open projects like OCW, which hold the promise of a massive transfer not only of knowledge but of teaching approaches and learning structures that can be adapted to local requirements and cultures.

"Given this expansion, OCW plays a key role in these emerging experiments" in education, Pitroda said.

Former Xerox Chief Scientist John Seely Brown, sounding what for him is a recurring theme, said Web technologies in education are creating a new generation of tinkerers, who tinker with content online rather than nuts and bolts. This is the domain of mashups, of combining existing content from various sources and media to create new, often more complex creations, often in the context of a community of peers who share a common passion.

"Maybe the next stage for OCW is shifting from [a focus on] content to actions on or with the content," he said. "We have the ability to bring back tinkering, which is the basis of our intuition. We get our intuitions from playing around with stuff."

Their musings prompted further musings from the audience.

Someone wondered whether the new technologies both inspiring and enabling OCW and other projects have rewired the brains of the next generation, so that entirely novel ways of teaching and learning are now needed. Another asked whether, if these technologies are democratizing learning, that doesn't call into question the classic idea of the university as a "certifier," through its degree programs, that a student has acquired a certain level of knowledge. Still another wondered how OCW could be augmented by faculty from around the world while still maintaining some criteria of excellence.

These and many other questions will have to be addressed as part of a developing global conversation about the "meta university," suggested Charles Vest, MIT's former president and an early and enthusiastic backer of OCW. This concept is an attempt to blend what Vest described as the "deeply human activities" of teaching and learning, with advances in information technology that are making possible new tools for those activities: vast digital archives, open digital publications such as the Public Library of Science, projects like the Sakai open source learning management system, and projects like MIT's iLabs, which lets students around the world use the Internet to access automated lab equipment, run automated experiments, and analyze and share data.

"The emotion I feel right now is humility," said Hal Abelson, professor of computer science and engineering at MIT, and founding director of Creative Commons, a non-profit that offers free tools for content creators to mark their online creative work with the freedoms and permissions they want the work to carry. "What OCW has led us to see is what we're in something like 'Education 1.0' What comes next? We're imagining the future."


Cisco Confirms Its VoIP Phones Can Be Used To Spy On Remote Calls

Linda Leung

Cisco confirmed it is possible to eavesdrop on remote conversations using Cisco VoIP phones. In its security response, Cisco says: "an attacker with valid Extension Mobility authentication credentials could cause a Cisco Unified IP Phone configured to use the Extension Mobility feature to transmit or receive a Real-Time Transport Protocol (RTP) audio stream."

Cisco adds that Extension Mobility authentication credentials are not tied to individual IP phones and that "any Extension Mobility account configured on an IP phone's Cisco Unified Communications Manager/CallManager (CUCM) server can be used to perform an eavesdropping attack."

The technique was described by Telindus researcher Joffrey Czarny at HACK.LU 2007 in Luxembourg in October.

Cisco has published some workarounds to this problem in its security response.

Also in October, two security experts at hacker conference ToorCon9 in San Diego hacked into their hotel's corporate network using a Cisco VoIP phone.

The hackers, John Kindervag and Jason Ostrom, said they were able to access the hotel's financial and corporate network and to record other phone calls, according to a blog on Wired.com.

The hackers used penetration tests built around a tool called VoIP Hopper, which mimics the Cisco data packets sent at three-minute intervals and then brings up a new Ethernet interface, getting the PC - which the hackers swapped in place of the hotel phone - onto the network carrying the VoIP traffic, according to the blog post.
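For readers unfamiliar with the mechanism, the sketch below shows roughly what the final step of such a VLAN hop looks like on a Linux box, assuming the voice VLAN ID has already been learned (for example, from a sniffed advertisement); the VLAN ID and interface name are made up, and the sniffing step is omitted. It uses only the standard `ip` and `dhclient` commands and should only ever be run on networks you are authorized to test.

```python
# Hypothetical final step of a VLAN hop: create a tagged subinterface on the
# learned voice VLAN and request an address. VLAN ID and interface are invented.
import subprocess

VOICE_VLAN_ID = 200               # assumed to have been learned earlier
PARENT_IFACE = "eth0"
vlan_iface = f"{PARENT_IFACE}.{VOICE_VLAN_ID}"

subprocess.run(["ip", "link", "add", "link", PARENT_IFACE, "name", vlan_iface,
                "type", "vlan", "id", str(VOICE_VLAN_ID)], check=True)
subprocess.run(["ip", "link", "set", vlan_iface, "up"], check=True)
subprocess.run(["dhclient", vlan_iface], check=True)   # request an address on the voice network
```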

The Avaya configuration is superior to Cisco's, according to the hackers, because an attacker has to actively send requests rather than simply run a sniffer, although it can be breached the same way, by replacing the phone with a PC.
