SSD Drives, Issues


MrAl

Hi there,


As you all know by now, SSD drives are slowly creeping more and more into the lower-cost market, so almost everyone wants to get one. They are supposed to be faster and more reliable. There are issues, however, and I thought we might discuss some of them here. Apparently there is no 'Computers' section on ET yet, so we would have to talk about it in this area.

I've found some issues with XP, such as it being a good idea to set a certain size offset (partition alignment) on the drive before installing the XP operating system. I think SP2 is a minimum too with these. With Win 7 I think most of the issues have been addressed by the installer for that OS.
It seems that due to the limited number of writes per SSD cell, it is a good idea to turn some things off and move some files and folders to other, mechanical platter-type drives.
Questions come up such as how to move the Recycle Bin, but I've found out that this is one thing that doesn't really have to be moved.

Anything else you care to add would be nice to talk about here.
 
Oh hey, that's great (about the new forum, that is). Maybe someone could move this thread into that area instead, then.


You think they have an ENTIRE set of new cells ready to go? For a 64GB drive that would mean they have to ship a 128GB drive with only 64GB showing up, right? Could this be true?

[MOD NOTE: Thread moved into Computers category]
 
Hi,


Thanks for moving this thread. I didn't know that the Computers section had been created so quickly. Nice.

Anyway, I guess not that many people here are using SSD drives yet. I thought more would be by now, but I guess it will be a while yet.

There is a lot of talk on the web about them, however. Some of it is interesting, and I think some of it pertains to older-model drives too, so some caution has to be used about what to take as fact and what not to.

For one thing, the boot-up speed is supposed to improve, but I haven't noticed any difference to speak of. Maybe a tiny bit faster, but that's not much. Maybe with Win 7 there is more of a difference, as I am using XP.
 
Modern SSDs make use of 'wear leveling' in hardware, so moving certain files to different places does not need to be done by the user; the drive handles it itself. There is generally a small area on the SSD which can tolerate many more read and write cycles than normal. In this area the hardware keeps track of how much each sector on the SSD has been used and writes to the least-used sectors accordingly.

Simply check whether your SSD employs wear-leveling technology to see whether you have to do it yourself.
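
The basic idea is easy to sketch in a few lines. This is only an illustration of dynamic wear leveling in general, not any particular controller's algorithm; the block count and the single hammered logical block are made-up values.

```python
# A minimal sketch of dynamic wear leveling (illustrative only, not any
# vendor's actual controller logic). The drive keeps an erase count per
# physical block and maps each logical write to the least-worn free block,
# so even a single logical block that is written constantly gets spread
# across the whole device.

class WearLevelingFTL:
    def __init__(self, num_blocks):
        self.erase_counts = [0] * num_blocks        # wear per physical block
        self.logical_to_physical = {}               # logical -> physical map
        self.free_blocks = set(range(num_blocks))   # unallocated physical blocks

    def write(self, logical_block):
        # The previously used physical block now holds stale data; free it.
        old = self.logical_to_physical.get(logical_block)
        if old is not None:
            self.free_blocks.add(old)
        # Pick the free block with the lowest erase count.
        target = min(self.free_blocks, key=lambda b: self.erase_counts[b])
        self.free_blocks.remove(target)
        self.erase_counts[target] += 1              # erase-before-write cost
        self.logical_to_physical[logical_block] = target

ftl = WearLevelingFTL(num_blocks=8)
for _ in range(100):
    ftl.write(logical_block=0)                      # hammer one logical block
print(ftl.erase_counts)                             # wear is spread across all blocks
```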
 
Hello there Gobbledok,

I've read about wear leveling, but have not looked into the details of how it works on different drives yet. That's very interesting information, and yes, the one I happen to have does have wear leveling, and I am happy about that of course :)

I didn't know about the 'special' area on the drive that can take more write cycles than the rest, though. With some drives I have read about leaving a percentage of the drive 'open' for system use, such as 10 or 15 percent, which means not using the entire 100 percent of the storage space. Perhaps they have changed that now too.
 
Hi MrAl,

You're right, older drives used to reserve some percentage of unused disk space in order to implement wear leveling. Some drives still do, I believe (e.g. if you see a drive advertised at 120GB instead of 128GB, then it works this way).

Anything which uses a 'Sandforce' SSD controller uses the new (much better) way.
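
For a rough sense of scale, here is that 120GB-versus-128GB example worked out; the only inputs are the advertised and raw sizes mentioned above.

```python
# Quick arithmetic on the over-provisioning example above: a drive with
# 128 GB of raw flash sold as 120 GB keeps the difference back for wear
# leveling and bad-block replacement (figures assumed from the example).

raw_gb, advertised_gb = 128, 120
reserved_gb = raw_gb - advertised_gb
print(f"Reserved: {reserved_gb} GB ({reserved_gb / raw_gb:.1%} of raw capacity)")
# -> Reserved: 8 GB (6.2% of raw capacity)
```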
 
I've read about wear leveling, but have not looked into the details of how it works on different drives yet
You probably won't find many details concerning how they function; companies will black-bag their methods as much as is physically possible to prevent other companies from copying them. Googling wear leveling is about as far as you'll get. If you ask me, it's one of the many technical details about a computer that I couldn't care less about as long as it works. Sandforce happens to make some very sophisticated SSD controllers, and Googling more about them might net you some information about rough methods.
 
Hi,


Thanks for the reply. I've been looking around on the web and, as you mention, there isn't a whole lot of information out there yet. I guess we have to wait for some disgruntled employees to blast the web :)

I did find a drive with 120GB capacity using Sandforce. I'm keeping an eye on this technology because I think it may one day replace mechanical hard drives. One of my age-old beefs with floppy drives was that they were too slow, and then came platter hard drives, which were faster. Now we have the SSD, which outperforms even the fastest mechanical hard drive. That's very good news to me, since I noticed that some file transfers (copying from one place to another, especially from one hard drive to another) involving a large number of files (greater than 1000, maybe more) take an awfully long time relative to what the drive is supposedly capable of doing. I found out that one of the reasons for this is that the 4-kilobyte read and write speeds on a typical platter hard drive are actually about 50 times slower than the rated bandwidth of the drive for sequential operations. This means that instead of 15 seconds it takes 3 to 5 minutes to copy 10,000 files of average size around 100 kilobytes. Copy a single huge file that has the same total number of bytes, however, and it copies much faster (like 15 seconds).
Now AHCI was supposed to take care of that, at least to some degree, and it does, but the overall average performance may not change that much because of the physical layout of the stored data at the time of the copy operations. It only saves seek time when the layout is such that the head won't have to move as much as when the copy commands are sent at random. Thus, many real-world reports are filled with talk about how it often doesn't help much at all.
The SSD is not as fussy about where the data actually is, so it ends up being about 10 times faster for those small reads and writes, which can make a big difference.
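
Here is a back-of-envelope version of those copy-time numbers. The throughput figures are assumed round values chosen to match the example above, not measurements.

```python
# Back-of-envelope copy-time estimate using assumed round figures that
# match the example above: 10,000 files of ~100 KB each on a platter
# drive doing ~67 MB/s sequentially, versus an effective 4-5 MB/s once
# the heads spend most of their time seeking between small files.

files = 10_000
avg_file_kb = 100
total_mb = files * avg_file_kb / 1024                   # ~977 MB of data

seq_mb_s = 67                                           # sequential throughput
print(f"One {total_mb:.0f} MB file: {total_mb / seq_mb_s:.0f} s")       # ~15 s

for eff_mb_s in (4, 5):                                 # assumed small-file rates
    print(f"{files} small files at {eff_mb_s} MB/s: "
          f"{total_mb / eff_mb_s / 60:.1f} min")        # roughly 3-4 minutes
```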

One thing that I haven't seen, though, is a much faster boot-up speed. I expected it to be faster, but it seems about the same as with a regular hard drive. There's just too much housekeeping to do during the boot process, I guess.
 
MrAl, it'll never happen. The kinds of designs that make this stuff work, and work well, are distributed; there is no 'mastermind', and you'll never find these kinds of secrets available publicly. It'd be like finding the full unadulterated schematic for a modern Intel processor on the web; not the instruction set, the actual CMOS masks. The raw concepts themselves aren't anything new, though; implementation is the key to making money in the broken patent market, and from there it's only a matter of time before reverse engineering can override secrecy.

SSDs are already supplanting physical hard drives, but only for fast access on mid-range storage; magnetic drive density continues to go up for bulk storage, if not speed.

Long boot times are, by current OS standards, actively becoming obsolete. Hardware and software have come so far that boot times with more modern software and hardware will continue to decline; the market demands it.

Most of a boot is fault checking and memory clearing, etc. Housekeeping stuff. Shorter POST methods have already decreased boot times drastically in recent memory. Once the machine is in a known state, standby and the like take care of the rest.
 
Hi Scead,

Thanks for the info. When I talk about boot speed, I mean the time it takes the OS to get up and running, after the BIOS has done its initialization.
I haven't seen much of a change, but then I haven't used hibernation either, like maybe some are using. I couldn't see writing 4GB to the new drive every time I shut down, but then again I bet the boot would be a lot better. Maybe I'll try it a couple of times just to see how it works.
I turned off hibernation because that is supposed to make the drive last longer, due to NOT writing 4GB every shutdown, but then again if I don't shut down that often (say once or twice a day) maybe it won't make that much difference anyway.

Maybe you know if there has been any improvement in the write-cycle life of a cell in these things. Last I read it was some 10,000 (that's ten thousand), while Flash is more like 100,000 (one hundred thousand). I'll be really happy if they ever get that high, with the same speed or better of course.
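
For what it's worth, even the 10,000-cycle figure goes a long way once wear leveling spreads the writes around. A rough estimate, using assumed figures (64GB drive, 4GB hibernation file, perfect wear leveling, write amplification ignored):

```python
# Rough endurance estimate with assumed figures: a 64 GB drive, 10,000
# program/erase cycles per cell, a 4 GB hibernation file, and wear
# leveling spreading writes evenly (write amplification ignored).

capacity_gb = 64
pe_cycles = 10_000
total_write_gb = capacity_gb * pe_cycles        # 640,000 GB of lifetime writes

for hibernates_per_day in (1, 2):
    gb_per_day = 4 * hibernates_per_day
    years = total_write_gb / gb_per_day / 365
    print(f"{hibernates_per_day}x per day: ~{years:.0f} years from hibernation writes alone")
```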
 
Thanks for the info. When I talk about boot speed, I mean the time it takes the OS to get up and running, after the BIOS has done its initialization.
The boot process isn't primarily limited by the hard drive; it's limited by the OS itself and its initialization functions (outside of the BIOS), which are by and large clunky and slow. 80+% of that time is WAITING, not actual work. This is most poignantly shown by watching a modern, non-optimized Linux kernel boot on a PC and watching the text init process the entire time. All the time spent is in delay loops, waiting for some piece of hardware to init, or for some hardware detect to finish. You could cut a Linux boot to 25% of the auto-detect time by properly configuring it.

This has come to light with Windows 7, and Windows 8 is supposed to improve on it even more drastically, trying to reduce boot times to virtually zero with a sort of hybrid hibernation that doesn't require an entire memory dump; only certain parts of the OS memory are saved from the last known stable boot.

Personally, I think hibernation is silly. My system never shuts down; it lives in standby, or I just leave it on and let my torrent software seed constantly. But I don't have limited bandwidth on my net connection or limited power problems.

Anyone who thinks they're saving money by shutting off their computer needs to grow a clue and do the math to see how much it costs to run a PC for a year. It's ridiculously low; you could save more money in energy by converting one or two incandescent bulbs in your home to CFLs.
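
Doing that math with assumed round numbers (a few watts of standby draw, a typical electricity rate, typical bulb wattages) looks something like this:

```python
# The shutdown-versus-standby math with assumed round numbers: a few
# watts of standby draw while the machine is "off", a $0.12/kWh rate,
# and two 60 W incandescent bulbs swapped for 14 W CFLs.

rate_per_kwh = 0.12                                     # assumed electricity price

standby_watts, standby_hours = 4, 16                    # assumed draw and hours "off"
standby_cost = standby_watts / 1000 * standby_hours * 365 * rate_per_kwh

bulb_savings_watts = 2 * (60 - 14)                      # two bulbs swapped to CFL
bulb_hours = 4                                          # assumed use per day
bulb_savings = bulb_savings_watts / 1000 * bulb_hours * 365 * rate_per_kwh

print(f"Standby instead of off, per year: ${standby_cost:.2f}")    # ~$2.80
print(f"Two-bulb CFL swap saves, per year: ${bulb_savings:.2f}")   # ~$16
```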

My machine comes up from standby in 10 seconds, and if I leave it on, all I have to wait on is the CFL driver in the monitor to stabilize. If I bought an LED-based display for my computer I wouldn't even have to wait that long.
 
Hi Scead,

It's not that I question the OS's ability to boot; I can imagine that it is using low-level drivers to read the HD too. It's just that many people on the web claim that their OS boots faster with the SSD, but as I said, I don't see that much of a difference.
I'll have to do more experiments, I guess. Hibernation does get it up and running faster (I've tried it since yesterday), but I'm not sure yet whether I want to use it much or not.
I like to shut down my system rather than keep it in standby. Standby means that if the power goes out during the 'off' time, you lose your desktop and have to reboot anyway. With hibernation, you get back up and running even if the power goes out while it is off.

My monitor seems to 'boot' up quickly, so I don't have a problem with that.
Have you seen any LED backlight units? Did you notice the contrast increase a lot or just a little, or are there any other differences you've noticed?

Any idea whether the SSD write-cycle cell life has increased at all since the 10k figure came about?
 
It's not that I question the OS's ability to boot; I can imagine that it is using low-level drivers to read the HD too. It's just that many people on the web claim that their OS boots faster with the SSD, but as I said, I don't see that much of a difference.

That is emphatic proof that HD access is not a major determining factor in boot times. I'm sure there is some difference, but the process itself is mostly chronic delay, hardware discovery, and excessive power-on self tests. This directly shows that the OS's boot process itself is the limiting factor. You should look up the cold-start boot times of Windows 8 for comparison.


I like to shut down my system rather than keep it in standby. Standby means that if the power goes out during the 'off' time, you lose your desktop and have to reboot anyway. With hibernation, you get back up and running even if the power goes out while it is off.
If you're worried about losing the desktop during standby because of where you live, buy a cheap UPS, one that can hold up system power for the requisite shutdown sequence. They're quite inexpensive, and the batteries only need to be replaced every 3-5 years.
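
A rough feel for the numbers, using assumed figures for a small consumer UPS and a modest desktop load; real lead-acid runtime will be somewhat lower at high discharge rates, but there is still plenty of margin for a clean shutdown or hibernate.

```python
# Rough runtime estimate for a small consumer UPS with an assumed
# 12 V, 7 Ah battery feeding a desktop drawing ~120 W.

battery_wh = 12 * 7                                     # ~84 Wh of stored energy
inverter_efficiency = 0.85                              # assumed
usable_wh = battery_wh * inverter_efficiency * 0.8      # keep ~20% in reserve

load_watts = 120                                        # assumed PC + monitor draw
runtime_min = usable_wh / load_watts * 60
print(f"Estimated runtime at {load_watts} W: ~{runtime_min:.0f} minutes")   # ~29 min
```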

I don't have much comparative experience with LED backlights. Their longevity is superior to CFLs by far, they're better for cold starts, and their brightness can be much higher, which will increase the contrast as well. I personally don't think they're all that important; the major advantage is you won't see LED units failing in 2 years from bad CFL drivers. A well-built CFL model is just as good as an LED one unless you're going for a truly epic media experience, but then again, if you want that, ditch LCDs completely and jump on the OLED bandwagon. Some OLED displays are starting to hit the market. They're much more expensive, but the brightness and contrast ratio are drastically better. I personally would like one because I have some eye fatigue issues with the polarized light that comes from LCDs; OLEDs don't have this problem, as they directly emit light rather than block a backlight like LCDs do.

I know nothing about modern SSD write cycles. I doubt they've gotten much better, but good wear leveling mitigates that drastically. As the years go by and production methods for insulators at small scales improve, the number of possible cycles will keep going up, as well as density.

As far as overall system speed goes, I'd lean towards a simple striped RAID array of 10k RPM drives rather than going to SSD. RAID controllers are standard fare nowadays and hard drive prices are absurdly low.

You could always skip SSD drives and go to RAM drives. They're expensive as all hell, but read/write performance is limited only by the bus speed. There are units that have battery backup for power failure.
 
Hi again,


I think I see what you are getting at now. I suspected that Win 7 boots faster, and I bet that's why some people are claiming their machine boots faster. However, I have a friend who uses Win 7 and he claims there isn't much difference either, so I'm puzzled.

The LED displays are supposed to have better contrast because, for one thing, they can use dynamic local dimming, which LCDs can't.

I'm not sure I want to get involved with UPSes yet.

Yes, RAID is still being used, but I can't see buying two or more of the same drive just to have a faster hard drive.

You mean people are still using RAM drives? I'm not into going to too much expense either, though.
 
The LED displays are supposed to have better contrast because, for one thing, they can use dynamic local dimming, which LCDs can't.

You mean CFLs? The only difference between an LED TV and an LCD TV is the backlight, not the inherent display technology; they're both LCD based. It's a horribly bad acronym mashup. In an LED display the backlight is provided by white LEDs; in a standard LCD it's provided by CFL tubes.

I've taken apart several LCD monitors and I've seen how the light spreaders work. I can see the possibility of an LED unit increasing contrast by using dynamic dimming, but even high-quality standard CFL-based LCDs are so high contrast it's absurd!

As far as I'm concerned, LED displays are a 'day late and a dollar short'; the cost/benefit just doesn't exist. OLED displays, whose pixels are generated by true LEDs, are so superior that once the manufacturing of them scales up (and it currently is), LCDs and LED-based LCDs will be obsolete within 5 years.

I'm not sure why you'd avoid the use of a UPS... they're cheap.

Many people would disagree with your dislike of RAID; there are people who will actually run RAID arrays of SSDs.

As the controllers get better, bulk and random access will increase substantially.
 
You mean CFLs? The only difference between an LED TV and an LCD TV is the backlight, not the inherent display technology; they're both LCD based. It's a horribly bad acronym mashup. In an LED display the backlight is provided by white LEDs; in a standard LCD it's provided by CFL tubes.
Yes, sure.

I've taken apart several LCD monitors and I've seen how the light spreaders work. I can see the possibility of an LED unit increasing contrast by using dynamic dimming, but even high-quality standard CFL-based LCDs are so high contrast it's absurd!
The dynamics are very different, though. LCD monitors lose some of the blacks, and you'll read about this on the web. It doesn't seem like it matters, and it doesn't all the time, but when a dark scene comes up you can see how bad it gets. The images get very hard to make out. That's because the dynamics are shifted toward the light tones, the upper color values. There are more lighter tones than darker tones, even though you don't notice it all the time. Some monitors actually have a CFL that can dim to help with this.

As far as I'm concerned, LED displays are a 'day late and a dollar short'; the cost/benefit just doesn't exist. OLED displays, whose pixels are generated by true LEDs, are so superior that once the manufacturing of them scales up (and it currently is), LCDs and LED-based LCDs will be obsolete within 5 years.
Yes, but we're still waiting.

I'm not sure why you'd avoid the use of a UPS... they're cheap.
I don't need one.

Many people would disagree with your dislike of RAID; there are people who will actually run RAID arrays of SSDs.
I agree, but I don't need super, super fast speed; I just need fast. If you need really fast speed, that's different. It's not a necessity for me, just something interesting.

As the controllers get better, bulk and random access will increase substantially.
Controllers for what? SSDs? That would be nice too.
 
SSDs are already supplanting physical hard drives, but only for fast access on mid-range storage; magnetic drive density continues to go up for bulk storage, if not speed.


I work on an EMC VMAX. It is some really neat stuff, and I think we have roughly 120TB of usable storage divided across three tiers (SSD, Fibre Channel and SATA). What is interesting is the way the system distributes data across the tiers; it is able to automatically move "hot spots" between the different types of storage. When an Oracle or DB2 system is connected we will see ratios of 10% SSD, 40% FC and 50% SATA. It is basically a good way to maximise that expensive storage and prevent waste. Overall performance increases too, since we have several hundred servers connected to this thing.

Another side effect is that you no longer have to worry about file system layout and RAID striping at the host level. You used to have to be very careful about table layout and disk distribution, but the SAN manages that for you now.
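
The hot-spot policy itself can be sketched in a few lines. This is only an illustration of the general idea with an assumed 10/40/50 split, not EMC's actual tiering algorithm.

```python
# A minimal sketch of automated storage tiering along the lines described
# above: extents are ranked by recent access count and the hottest ones
# land on the fastest tier, using an assumed 10/40/50 capacity split.

from collections import Counter

TIERS = [("SSD", 0.10), ("FC", 0.40), ("SATA", 0.50)]   # tier name, share of extents

def retier(access_counts, num_extents):
    """Assign every extent to a tier, hottest extents to the fastest tier."""
    ranked = sorted(range(num_extents),
                    key=lambda e: access_counts.get(e, 0), reverse=True)
    placement, start = {}, 0
    for name, share in TIERS:
        count = round(share * num_extents)
        for extent in ranked[start:start + count]:
            placement[extent] = name
        start += count
    return placement

# Example: 20 extents, a couple of "hot spots" hammered by a database workload.
accesses = Counter({3: 900, 7: 850, 12: 40, 1: 35, 5: 30})
print(retier(accesses, num_extents=20))                 # extents 3 and 7 land on SSD
```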
 
Hi,

That's quite interesting. When I did my system I had to do all that by hand, choosing what should go on the SSD and what should go on the SATA drive. It took a while to decide, and I'm still not totally done yet; I'm still deciding what to do with some things. It's not just a matter of speed with a personal computer, though; sometimes it's about keeping some things 'together' on the same drive so they are all in that group when you go to look for them later.

One problem that comes up which is quite annoying, and that I haven't looked up what to do about yet, is that when a drive letter changes on, say, a SATA drive (substituting the SSD for a drive), all of the 'shortcut' links become dead ends. The only solution seems to be a program to look up all the shortcuts on the drive and update them with the proper information (i.e. the new drive letter). I'm still working on that, however :)
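
A quick sketch of that idea (Windows only, using the WScript.Shell COM object via pywin32). The drive letters and the starting folder are placeholder assumptions; anyone trying this should test on copies of a few shortcuts first.

```python
# A sketch of the shortcut-fixing idea: walk a folder, open each .lnk
# file through the WScript.Shell COM object (pywin32), and rewrite any
# target that still points at the old drive letter. OLD_DRIVE, NEW_DRIVE
# and START_DIR are placeholder assumptions.

import os
import win32com.client

OLD_DRIVE, NEW_DRIVE = "D:", "E:"            # assumed old and new drive letters
START_DIR = r"C:\Users"                      # assumed place to search for shortcuts

shell = win32com.client.Dispatch("WScript.Shell")

for root, _dirs, files in os.walk(START_DIR):
    for name in files:
        if not name.lower().endswith(".lnk"):
            continue
        path = os.path.join(root, name)
        shortcut = shell.CreateShortCut(path)           # opens the existing .lnk
        target = shortcut.Targetpath or ""
        if target.upper().startswith(OLD_DRIVE):
            shortcut.Targetpath = NEW_DRIVE + target[len(OLD_DRIVE):]
            shortcut.save()                             # rewrites the .lnk on disk
            print(f"Updated {path}: {target} -> {shortcut.Targetpath}")
```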
 
I'm not sure if you are using an LVM (logical volume manager), but dead shortcut links would probably be a moot issue. Maybe I missed it, but which OS are you using? When using LVM you can move physical volumes (disks) in and out of a group while still maintaining the original drive letter, or in my case, the mount point.
 