WithinTwentyYears we will have unlimited storage capacity.
We have already arrived at a point where several terabytes of storage are available at relatively low cost.
Hard drives are at $249 for 1.5 terabytes (20081225). Within 5 to 10 years, they'll be at $100 for 1000 terabytes. So for $100 you'll have enough room to store 400,000 hours of video. IOW, everyone will be able to record, edit, and share all the video they want with whomever they want.
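A quick back-of-the-envelope check of that figure (a sketch in Python; the roughly 2.5 GB per hour bitrate is my assumption, corresponding to ordinary standard-definition video):

 # Back-of-the-envelope check: how many hours of video fit in 1000 TB?
 # Assumes ~2.5 GB per hour (about 5.5 Mbit/s, a typical SD bitrate).
 DRIVE_GB = 1000 * 1000          # 1000 TB expressed in GB
 GB_PER_HOUR = 2.5               # assumed video bitrate, in GB/hour

 hours = DRIVE_GB / GB_PER_HOUR
 print(f"{hours:,.0f} hours of video")            # -> 400,000 hours
 print(f"{hours / 24 / 365:.0f} years of continuous footage")   # -> ~45 years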
Yes, but... operating system and application demands have always increased, too. The operating system will use 30,000 GB before you've even installed an app. Also, 20 years hence, video will have progressed to surround 3D-holo-vision, smell replication and full-touch simulation. To achieve this, each movie could need around 1000 GB, so you'll run out sooner than you think.
Yes, but... compression will have become so sophisticated that you will be able to compress a 1000 GB video file to just a few megabytes. :-)
I have a prized advert from the early 90s for a 40 Meg hard drive (that's Meg, not Gig). The caption says that this "huge 40 Meg drive will be able to store your entire life's work". They just weren't anticipating the growth in software and file size, which continues to this day. -- BrianCooke
Further, within 20 years Microsoft Windows will be dead. For the past 20 years, Microsoft has been reimplementing Unix. However, there are no longer any important features of Unix which Windows lacks, so Microsoft can no longer make important improvements to Windows. This is why the market has largely ceased buying upgrades. Further, substantial improvements to operating systems cannot be made within the Unix architecture. And in the current environment, a radical new operating system can only become popular if it is GPLed. A commercial product cannot compete with a libre product on fair terms.
3D holovision already exists. It goes by the names of virtual reality, virtual worlds, et cetera. Since all 3D is necessarily computer generated, it doesn't require substantially more storage capacity than video.
Hmmmm. Assuming the idea that MS is asymptotically approaching unix is true, there is still a problem: Unix is a (the canonical?) WorseIsBetter solution. We can do a lot better than Unix, even assuming that MS can't.
Agreed. Hence MS Windows and all its ilk will die.
Whether this is true (MS Windows and all its ilk will die), or just idle speculation, will be found out in time. Since 1979, I have been using Microsoft products, and have found one thing to be true: they become better and make me more productive with each innovation. While each innovation and the rush of the product to market mean that new releases to fix the problems always follow, there are definite and useful improvements in scope and performance. This may be an imperfect strategy, albeit a necessary one, but in my experience it is satisfactory from my software-using viewpoint. -- DonaldNoyes
That is, at best, an unlikely and unrealistic proposition. It takes time, effort, and money to configure an OS to a machine, and more of the same to raise a skilled body of support personnel capable of responding to requests related to three different operating systems.
I said "bootable" as in already installed by the seller by the store support team similar to a "GeekSquad". The user would be involved in the process with the only thing done being the selection of the desired OperatingSystem at boot time for the current session using the BootSelectionMenu. Support would be provided by the seller. The buyer need only insist that during a period (say the trial period) that the seller provides that support. The seller would build this support into the price or make it an added charge.
[What do sellers gain, in terms of financial benefit, from providing multi-boot? They still have to pay appropriate fees for each bootable copy of MicrosoftWindows, but now have to pay staff to install it along with the alternative OSes and support all of them. I see little market for a machine with multiple pre-installed OSes, anyway. Most consumers want a computing appliance about which they can say ItJustWorks. With hardware prices dropping and the capability and ease-of-use of free operating systems (like Linux) closing in on MicrosoftWindows with each new release, there is an almost inevitable point where vendors will find it more viable (i.e., lower cost & higher margin, equal or better appeal to the consumer) to sell hardware with a free OS installed than to sell the same hardware with MicrosoftWindows.]
You are right. Most buyers don't want a machine for which they have to evaluate and decide what to run and how to run it. This is why many if not most first-time computer buyers (a shrinking percentage of all computer buyers) rely on the merchant's expertise when buying the computer, the peripherals, and the software. The merchant will most likely sell the machine with the most comprehensive hardware/software bundle the buyer can afford. It follows that he will probably not make as much money if he promotes a machine which has a free operating system, free software, and drivers to connect to everything. While it may be true that Linux is closing in on Microsoft, and that it might be better if not best for the majority of users, it does not follow that because something is better or best for users, the majority of sellers are going to promote it, or that users are going to make it their choice.
One additional choice available now is that you can choose the components that make up your machine and BYOC (BuildYourOwnComputer) without Microsoft installed. (I have done it, at perhaps twice the cost of one already built, loaded with hardware, and integrated with the software to make it work.) Most people do not want to bother, or do not feel capable of doing this.
This has changed, and now (2012) more and more people are finding enough courage and expertise to do this, and merchants are devoting progressively more shelf space to make it possible.
With the discovery of the giant magnetoresistive effect, hard drive capacity has been doubling every 9 months.
But there is a serious problem: the I/O bandwidth between the hard drive and the rest of your system is NOT doubling anywhere near as fast. Within 20 years we will see a paradigm shift where programmers are largely forced to think of hard drives as serial devices rather than random-access devices. They will be great e.g. for backups (on par with tape), streaming audio and video, and so on. They will be less great for ever-larger relational databases.
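To put rough numbers on that gap (all figures below are assumed, illustrative drive specs, not measurements): the time for one full sequential pass over a drive keeps growing, which is what pushes designs toward treating the drive as a serial device.

 # How long does it take to read an entire drive at its sustained rate?
 # Capacities and transfer rates below are rough, assumed figures.
 drives = [
     ("1990: 40 MB drive",    40e6,   0.5e6),   # ~0.5 MB/s
     ("2008: 1.5 TB drive",   1.5e12, 100e6),   # ~100 MB/s
     ("2028: 1000 TB drive?", 1e15,   1e9),     # optimistic 1 GB/s
 ]
 for name, capacity_bytes, rate_bytes_per_s in drives:
     hours = capacity_bytes / rate_bytes_per_s / 3600
     print(f"{name}: {hours:8.1f} hours for one full sequential pass")

Even with an optimistic gigabyte per second, a full pass over the predicted drive takes over eleven days.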
You mean, you don't think that way already?? BTW, RAID.
[I think you mean stripe, old boy. I'll never own another server that doesn't do at least a 2 by 2.]
Of course, RAID-stripe doesn't help with the bus bottleneck problem at all, only makes it worse in fact.
Anyways, the fact that HDs are serial devices is recognized by LoggingFileSystems. The fact that LFSes haven't caught on says a lot about the unbearably glacial pace of OS development.
I have to disagree. Log-structured file systems are about reliability, not optimizing for serial write access. The data structures required to support efficient *random access* to data can get corrupted/out of sync, and that is why a log-structured file system is helpful. Also, I thought NTFS and ext3 and all other modern file systems were log-structured..(?)
They are not. They are JournalingFileSystems. And though journaling (hawk, spit) is a perversion of LFS for the purpose of reliability, LFS itself was invented for the express purpose of write-optimization to serial devices.
The relevant papers all start with "Given today's large RAM caches, read-optimization is a stupid strategy ... so why don't we write-optimize?"
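For the curious, the write-optimized idea reduces to something like this toy sketch (my own illustration, not any real LFS): every write is a sequential append to the log, and reads go through an in-memory index standing in for the large RAM cache.

 # Toy log-structured store: every write is a sequential append;
 # reads are served through an in-memory index (the "RAM cache").
 import os

 class LogStore:
     def __init__(self, path):
         self.log = open(path, "ab+")
         self.index = {}            # key -> (offset, length)

     def put(self, key, value: bytes):
         offset = self.log.seek(0, os.SEEK_END)   # always append: pure serial I/O
         self.log.write(value)
         self.index[key] = (offset, len(value))   # old copy becomes garbage

     def get(self, key) -> bytes:
         offset, length = self.index[key]
         self.log.seek(offset)                    # random reads, absorbed by cache
         return self.log.read(length)

 store = LogStore("toy.log")
 store.put("a", b"first version")
 store.put("a", b"second version")   # supersedes the first; a cleaner would reclaim it
 print(store.get("a"))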
A prediction: With the recently verified memristor effect, solid-state, random-access, (nearly) infinite write-cycle, non-volatile, micro- or nanosecond access time storage becomes a reality, with bit densities greater than those of current hard drives. These devices may additionally be accessed in a parallel manner, making them word-serial, not bit-serial, as magnetic media tend to be. With such media, the concept of filesystems starts to melt away, with object graphs filling in as its replacement. We can do this because internal fragmentation ceases to exist, for there are neither sectors nor seek time. Semispace GarbageCollection will replace manual filesystem management practices.
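To make the proposed management discipline concrete, here is a toy Cheney-style semispace collector over an object graph (the storage-level application is the prediction's extrapolation; the code is just the classic algorithm):

 # Cheney-style semispace collection over a toy object graph.
 # Live objects are copied from from-space to to-space; everything
 # left behind (unreachable "files") is reclaimed wholesale.

 def collect(roots, from_space):
     to_space = []                  # the empty semispace we copy into
     forwarding = {}                # old id -> new index (forwarding pointers)

     def copy(obj_id):
         if obj_id in forwarding:
             return forwarding[obj_id]
         new_id = len(to_space)
         forwarding[obj_id] = new_id
         to_space.append(dict(from_space[obj_id]))   # shallow copy of record
         return new_id

     # Copy the roots, then scan to_space breadth-first (Cheney's trick:
     # the scan pointer chases the allocation pointer, no stack needed).
     new_roots = [copy(r) for r in roots]
     scan = 0
     while scan < len(to_space):
         obj = to_space[scan]
         obj["refs"] = [copy(r) for r in obj["refs"]]
         scan += 1
     return new_roots, to_space

 # from_space: object 0 references object 1; object 2 is garbage.
 from_space = {
     0: {"name": "root dir",  "refs": [1]},
     1: {"name": "live file", "refs": []},
     2: {"name": "dead file", "refs": []},
 }
 roots, heap = collect([0], from_space)
 print([o["name"] for o in heap])   # ['root dir', 'live file']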
To what possible applications and uses can we put all of this capacity? (circa 2009)
By now, everyone with a brain thinks of every possible activity in terms of bytes, instructions per second, et cetera. There IS a unitary basis which we use to compare things, which includes everything. And there's nothing outside of that frame of reference.
Backup Costs the Bottleneck?
It seems the cost of backups is greater than the cost of the disks themselves, and this is perhaps the bottleneck. One "solution": if disks get cheap enough then, rather than use tape, there would be a couple of backup disk drives for each production drive, rotated off-site.
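In script form, that rotation scheme might look like this (the pool size of three drives and the labels are my assumptions):

 # Rotating a small pool of backup drives off-site: on any given day,
 # one drive receives the backup while the others sit off-site.
 # Pool size and naming are assumptions for illustration.
 from datetime import date

 BACKUP_DRIVES = ["backup-A", "backup-B", "backup-C"]

 def todays_drive(day: date) -> str:
     """Pick tonight's backup drive by cycling through the pool."""
     return BACKUP_DRIVES[day.toordinal() % len(BACKUP_DRIVES)]

 today = date(2009, 6, 1)
 print(f"Write tonight's backup to {todays_drive(today)}; "
       f"rotate the other {len(BACKUP_DRIVES) - 1} drives off-site.")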
I'll deny I possess unlimited storage capacity unless I can actually write a program that continuously writes to storage and never runs out. E.g. for knowledge representation and inference: keeping permanent record of -every- inference and -every- inference tree and -every- cascading change due to a change in the fact-set, and -every- sensor input (with lossless compression), etc.
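That test is easy to state as a program (a sketch; the file path and chunk size are arbitrary choices): keep appending until the write fails, and report how far you got.

 # Empirical test of "unlimited": append until the disk says no.
 # Path and chunk size are arbitrary choices for illustration.
 import errno

 CHUNK = b"\0" * (1024 * 1024)      # write 1 MiB at a time

 written = 0
 try:
     with open("/tmp/fill-me.bin", "wb") as f:
         while True:                # "never runs out" would mean this never ends
             f.write(CHUNK)
             written += len(CHUNK)
 except OSError as e:
     if e.errno == errno.ENOSPC:
         print(f"Ran out after {written / 1e9:.1f} GB: not unlimited yet.")
     else:
         raise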
I expect, though, that the future availability of storage capacity will still be fixed in any particular computer, and will be relevantly finite... such that 'forgetting' (or 'lossy compression') remains always necessary. One thing to keep in mind is that, while storage capacity does grow exponentially, so also does the ability to create or locate material to store. There will always be more sensors, and those sensors will continue to grow in resolution and bandwidth.
That said, we could probably keep everything ever edited directly by humans. It's keeping up with automatically generated content, continuous sensor inputs from billions of security cameras, program traces, etc. that isn't going to happen.
The computer I am now (2012) using has exceeded 7.5 terabytes of storage capacity, hence I save to it everything I think I do not want destroyed. Much of it I may never look at directly again. (I reserve for myself the possibility of finding it, via revolutionary new software of the future, to be invaluable and quite useful.) I am continually implementing ways to make it UsefulUsableUsed. -- DonaldNoyes 200812281640.20121204
Ever since I've used hard drives in excess of 300 MB, I have always partitioned my system drive into at least two partitions, plus one partition for my home directory. Every time I want to upgrade my OS distribution, I do so to an alternate system partition. I then copy over the files that I need from the old partition. In effect, I'm manually performing a semispace garbage collection pass. Manually deleting files I no longer need has become much too burdensome; it is ultimately cheaper for me to simply copy the files still in use. I often follow a similar approach in my home directory as well: I maintain two subdirectories, one called tmp and one called repos. All new material goes into tmp first. If I feel I'd like to preserve it, I use Mercurial to create a repo of it in repos. Periodically, tmp gets wiped clean. So I guess I also employ generations in my garbage collection. Very interesting - I never made the connection between my filesystem management practices and garbage collection before now.
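In script form, that manual semispace pass looks roughly like this (a sketch; the mount points and the list of "wanted" files are hypothetical):

 # Manual semispace pass over partitions: copy only the files still
 # wanted from the old system partition to the freshly installed one;
 # everything not copied is implicitly "collected" when the old
 # partition is reformatted. Mount points below are assumptions.
 import shutil
 from pathlib import Path

 OLD_ROOT = Path("/mnt/old-system")     # from-space
 NEW_ROOT = Path("/mnt/new-system")     # to-space

 WANTED = ["etc/fstab", "home/donald/projects", "home/donald/repos"]

 for rel in WANTED:
     src, dst = OLD_ROOT / rel, NEW_ROOT / rel
     dst.parent.mkdir(parents=True, exist_ok=True)
     if src.is_dir():
         shutil.copytree(src, dst, dirs_exist_ok=True)
     else:
         shutil.copy2(src, dst)
     print(f"kept {rel}")
 # Anything under OLD_ROOT not listed in WANTED dies with the old partition.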
I never store everything I might desire to access such that it is InstantlyAvailable. Some of MyInformation is maintained in separate ConditionallyAvailableStorageAreas. This is also the location of MyArchives. I utilize partitioning and mapping of drives as well. Some of MyInformation is also available via FTP. The internet is also a DataStorageComponent in MyInformationSystem. An interesting discovery by GE of a means of producing 500 GB discs means one could carry, in a briefcase-sized case, storage media of at least 15 to 40 terabytes when in softside cases. This would be an immense amount of AvailableDataStorage. Even so, it would not be more than anyone might possibly want to have, even if it would exceed what most people would probably have in their collections. In a typical bookcase shelf unit 14 high by 12 deep by 30 wide, one could store discs holding 1000 terabytes of information.
http://blogs.zdnet.com/gadgetreviews/?p=3660
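Checking the bookcase figure (a sketch; the assumptions that the shelf dimensions are in inches and that bare 1.2 mm discs are stacked on spindles are mine):

 # Rough check of the bookcase claim, assuming the 14 x 12 x 30 shelf
 # dimensions are in inches (my assumption) and 500 GB per disc.
 INCH_MM = 25.4
 width_mm  = 30 * INCH_MM    # 762 mm
 depth_mm  = 12 * INCH_MM    # 305 mm
 height_mm = 14 * INCH_MM    # 356 mm
 DISC_GB, DISC_DIAM_MM, DISC_THICK_MM = 500, 120, 1.2

 # Bare discs on spindles: how many spindles fit, how tall each stack is.
 spindles = int(width_mm // DISC_DIAM_MM) * int(depth_mm // DISC_DIAM_MM)
 discs_per_spindle = int(height_mm // DISC_THICK_MM)
 total_tb = spindles * discs_per_spindle * DISC_GB / 1000
 print(f"{spindles} spindles x {discs_per_spindle} discs "
       f"= about {total_tb:,.0f} TB per shelf")

Under those assumptions the shelf holds roughly 1,700 terabytes of bare discs, so the 1000-terabyte figure is within reach, though slim cases would cut it down considerably.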
I have made steady progress towards "UnlimitedStorageCapacity": from a shoebox full of cassettes and a 4K TRS-80, progressing in stages, roughly one computer and one storage paradigm per version of Microsoft OSes, until now, when I have added to my desktop a laptop with a 64-bit processor, Windows Vista, and USB storage soon to measure in several terabytes. I guess the next step will be holographic optical storage with capacities soon to be measured in petabytes, and hand-held units or wearable computers connected optically to a super-internet and billions of other computers/persons/sites.
My laptop now has connected drives with over 7.5 terabytes of capacity (2012). I am moving toward my present goal of ten terabytes of storage capacity (and will reach it in the first quarter of 2013).
How much of your current capacity are you actually using?
With the maturing of the highly advertised "cloud", most people, myself included, will store almost all of their most important and precious data in the "sky", and will connect with it anywhere, using whatever device is at hand. They may do so using their own voice and/or bodily movements, or perhaps mental activity (the latter by 2050 or whenever the SpaceElevator is built - whichever comes first).