CES 2009 - Intel SSD, SLC, MLC and why Intel
We talk with Robert Allshouse, Business Development Manager of Intel's NAND group. He talks about Intel's SSDs, which are now hitting mass production. The products are guaranteed to work from 0 °C to 70 °C, use less power, and take far more of a beating than their disk counterparts. Just get ready to unload your wallet.
Rob Wray: Hi, my name’s Rob from MP3 Car. We’re here at CES 2009 at Intel’s booth, and Rob from Intel is here with us giving us a quick demo of some of their memory that’s been in the press recently. So you’ve got three different modules of memory here which are all very interesting.
So this one here is an eighty gigabyte module that sells for, what, about $500 on Newegg? This one is about $1,000, and how ––
Intel’s Robert Allshouse: That’s 160 gigabyte. And this little one down here is another eighty gigabyte. It’s the same drive in a slightly smaller form factor for your smaller nettops and other small form factor applications.
Rob Wray: Okay.
Intel’s Robert Allshouse: Using a micro SATA connector instead of the standard SATA connector.
Rob Wray: So how long do these last?
Intel’s Robert Allshouse: The consumer-level drives are rated for a five-year useful life, and that useful life assumes ten gigabytes of writes per day.
Rob Wray: Right.
Intel’s Robert Allshouse: The technology has finite write cycles, but you can read as much as you want. So in a read-intensive application, like continually reading maps off the drive, you’re not affecting the life.
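To put those endurance figures in perspective, here is some back-of-the-envelope arithmetic (my own illustration, not Intel's numbers beyond the 10 GB/day and five-year figures quoted above; the 80 GB capacity is the drive discussed here):

```python
# Endurance math for the rating quoted in the interview:
# 10 GB of writes per day over a 5-year useful life, on an 80 GB drive.

writes_per_day_gb = 10      # quoted in the interview
useful_life_years = 5       # quoted in the interview
drive_capacity_gb = 80      # the 80 GB drive discussed here

# Total data written over the drive's rated life:
total_writes_gb = writes_per_day_gb * 365 * useful_life_years
print(f"lifetime writes: {total_writes_gb} GB")  # 18250 GB, about 18 TB

# How many times the whole drive gets overwritten in that span:
full_drive_writes = total_writes_gb / drive_capacity_gb
print(f"full-drive overwrites: {full_drive_writes:.0f}")  # 228
```

In other words, the rating amounts to completely rewriting the drive a couple hundred times over five years, which is why read-heavy workloads like map navigation barely dent it.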
Rob Wray: Right. So you were telling me a little bit before about multilevel drives. With the multilevel drives you can write two bits of data per transistor, and with the single-level drives you can write one bit per transistor. So it sounds to me like most of our readers are really going to be interested in the multilevel, because you get the same read performance off the multilevel as the single level, and only your write performance is degraded. And with five years of writing ten gigs a day, the MLC is going to capture everybody unless you’re trying to run a database server or something with tons of writes.
Intel’s Robert Allshouse: That’s right. The SLC drives, our thirty-two and sixty-four gigabyte drives at near the same price points, are really focused on those SQL Server guys with high write-intensive applications. The five-year useful life is plenty for most consumers, actually more than enough. The difference in the writes, as you were saying: it’s about seventy megabytes per second write on the MLC drive, and 150 megabytes per second write on the SLC drive.
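For a concrete feel for that gap, here is a quick idealized comparison using the throughput figures quoted above (the DVD-sized file is my own example, and real transfers have overhead these numbers ignore):

```python
# Idealized sequential transfer times at the rates quoted in the
# interview: ~70 MB/s MLC write, ~150 MB/s SLC write, ~250 MB/s read.

def transfer_seconds(size_mb: float, rate_mb_s: float) -> float:
    """Time to move size_mb at rate_mb_s, ignoring all overhead."""
    return size_mb / rate_mb_s

dvd_image_mb = 4700  # e.g. a DVD-sized disc image

print(f"MLC write: {transfer_seconds(dvd_image_mb, 70):.0f} s")       # 67 s
print(f"SLC write: {transfer_seconds(dvd_image_mb, 150):.0f} s")      # 31 s
print(f"read (either): {transfer_seconds(dvd_image_mb, 250):.0f} s")  # 19 s
```

The read column is identical for both drive families, which is the point Allshouse makes next.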
Rob Wray: Right.
Intel’s Robert Allshouse: The reads are about the same. They both saturate the SATA bus at about 250 megabytes per second in reading. So your read performance between the consumer-level and the enterprise-level drives is the same.
Rob Wray: We were also talking about boot times. You said boot times were one of the least impressive benefits of flash, but a lot of the people we work with resume from hibernate instead, and since that’s mostly a read workload, we should get a pretty good boost resuming from hibernate, but not from booting.
Intel’s Robert Allshouse: Yeah, you definitely see improvements in your boot. You know, five, ten seconds. But if you look at a boot demo side by side, you’ll see the IO light isn’t flashing as much because the IO is going so much faster, yet you’re still waiting. There are a lot of other things that happen during boot besides just IO.
I’m less impressed by that than by something like hibernate, or really application load performance, because application load is where your random IO performance shows the huge difference. You’ll see a demo of that, where you can watch files transferring and applications loading immediately, in real time.
Rob Wray: Well, let’s get into that demo.
Intel’s Robert Allshouse: Alright. So the primary purpose of this demo is to show that while file transfers are happening alongside large applications, you don’t slow down the rest of your system. Right now we’re copying a 680-meg set of files. At the same time we’re going to open up Picasa, look at six large pictures, and create a collage, and then open another IO-intensive application like iTunes, which looks through your folders to see if there are new songs, all while that copy is happening. The whole thing takes about thirty seconds before the collage is done and the music is playing.
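The idea behind the demo, small reads staying responsive while a bulk copy runs, can be sketched as a simple experiment. This is my own illustrative benchmark, not Intel's demo code, and the file size and read pattern are made up; on an SSD the small-read latencies should stay low even mid-copy:

```python
# Sketch: time a burst of small 4 KB reads while a bulk file copy
# runs on a background thread. All files live in a temp directory.

import os
import shutil
import tempfile
import threading
import time

SIZE = 64 * 1024 * 1024  # 64 MB source file, kept small for a quick run
workdir = tempfile.mkdtemp()

src = os.path.join(workdir, "big.bin")
with open(src, "wb") as f:
    f.write(os.urandom(SIZE))

def bulk_copy():
    # The sustained sequential workload running in the background.
    shutil.copy(src, os.path.join(workdir, "copy.bin"))

t = threading.Thread(target=bulk_copy)
t.start()

# Meanwhile, do 100 scattered small reads and record each latency.
latencies = []
with open(src, "rb") as f:
    for i in range(100):
        start = time.perf_counter()
        f.seek((i * 997 * 4096) % SIZE)  # jump around the file
        f.read(4096)
        latencies.append(time.perf_counter() - start)

t.join()
print(f"worst small-read latency: {max(latencies) * 1000:.2f} ms")
shutil.rmtree(workdir)
```

On a hard disk, each small read has to compete with the copy for head seeks; on an SSD there is no head, so the random reads stay fast, which is what the collage-plus-iTunes demo is showing.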
Rob Wray: So the CES show floor is loaded with vendors, probably hundreds of them, trying to sell you memory for your laptop. What’s the difference between the stuff that would go in an inexpensive laptop, or that’s floating around the halls here, versus what we see at your booth?
Intel’s Robert Allshouse: There definitely is a big difference, even though they’re all based on the same NAND technology, MLC or SLC, but NAND underneath. What you do for, say, a digital camera, where you’re writing maybe a couple hundred times maximum over the life of the device, at the speed of moving a five-megapixel image or maybe high-def video at thirty megabytes per second, is different from what you’re trying to do in a laptop. And so you do have different grades.
The simplest grade would be a USB stick or an SD card, and those are designed around consumers who move small amounts of data rarely; you’re not using them all day, every day. Then you move to your nettop or netbook type design, which is still small density, with off-the-shelf components building a small, low-cost SSD. And at the high end, you have drives like this one that are architected from the ground up for very high performance: ten channels operating in parallel, versus maybe two or four in the lower-end drives for netbook-type applications.
Rob Wray: Right. And the other thing you guys have done, you were telling me before, is that in your controllers you’ve written things to optimize the write process so it’s faster, and you have more DRAM and things like that you wouldn’t see in consumer-grade drives.
Intel’s Robert Allshouse: Absolutely. A problem inherent in the technology behind these is that you can’t write a single bit, so depending on how a vendor writes their algorithms, they may have to do two, four, or ten times the amount of writes to change the data they actually want to change. The term is write amplification. We’ve optimized ours down to about 1.1 times. If you’re at two times versus one time, you’re getting half the useful life. We’re only doing about 10% extra writes, and we’re best in class in the industry.
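Since the NAND cells have a fixed cycle budget, useful life scales inversely with write amplification. A quick sketch of that relationship (my own illustration of the 1.1x-versus-2x point above, using the five-year figure from earlier in the interview as the baseline):

```python
# Useful life vs. write amplification: every extra internal write
# burns cycles from the same fixed NAND budget, so life ~ 1 / WA.

def effective_life_years(ideal_years: float, write_amplification: float) -> float:
    """Life relative to a hypothetical ideal 1.0x-WA drive rated ideal_years."""
    return ideal_years / write_amplification

# Same NAND budget, different controller algorithms:
print(f"{effective_life_years(5.0, 1.1):.2f} years at 1.1x")  # 4.55 years
print(f"{effective_life_years(5.0, 2.0):.2f} years at 2.0x")  # 2.50 years
```

That is the "two times versus one time means half the useful life" point in concrete terms: a controller writing only 10% extra data keeps nearly all of the rated endurance.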
Rob Wray: That’s great. Well, thanks for taking the time to give us such a thorough interview.
Intel’s Robert Allshouse: My pleasure. Thank you, Rob.
[End of Audio]