From running MS-DOS with specs like a 10MHz CPU, 640KB of memory, a 40MB hard disk drive and a 640 x 400 pixel display, to 4K panels, 5GHz chips, many gigabytes of RAM and a massive NVMe SSD, the rate of innovation in this category has been astounding. But what about the next 30 years? Looking ahead to 2052, what does the laptop look like? It's an interesting question, and to celebrate our 30th anniversary, we've enlisted a few experts to give us a sneak peek at the future of laptops.

Trucks, not cars

Back in 2010, Steve Jobs made an interesting comment comparing tablets and PCs to cars and trucks, suggesting that computers would continue to be relevant but would pale in comparison to tablets for the vast majority of consumers. However, things didn't quite pan out that way. Tablets continue to be great for content consumption, but if you actually want to get things done on the go, a laptop is the gadget of choice for most people.

In fact, that much was clear in 2015, when a Pew study showed that 36% of Americans owned a laptop, a tablet and a smartphone. That number has almost certainly grown since then, and my own setup reflects the divide: I use my iPad for binge-watching and blasting through some Apple Arcade games, whereas my M1 MacBook Pro is where the work happens. People are used to this split between devices.

Yet experts still talk about the death of the traditional laptop and the idea of combining all of these screens into one device. Steve Koenig, VP of research at the Consumer Technology Association, said as much: "I think the laptop will eventually collapse down to a form factor that will probably look very similar to the smartphone of today."

"Laptops will most likely not survive the next 30 years as they are nowadays," adds Cédric Honnet, MIT research associate. "A laptop is mainly composed of processing, power, storage, inputs and outputs, but its specificity is its portability, and we can easily anticipate that all these elements will be extremely miniaturized."

The thinking behind these predictions is that while laptops continue to get smaller and smartphones continue to grow in screen size (to the point that we now have foldables for bigger displays), demand will grow for a device that slots in between the two. Koenig specifically mentioned a "10.5-inch" display, or "maybe something a bit larger."

I get the idea of cramming it all into a smartphone form factor, because the numbers back it up. StatCounter shows that worldwide, mobile holds a dominant market share of over 50%. Compare that to tablets at around 2.5% and you can see why analysts have soured on the idea of a tablet being the answer to this question.

Honnet, on the other hand, laid out a grander vision that starts with eliminating the need for a display entirely and replacing it with a brain-computer interface (BCI). MIT has already experimented with using transcranial magnetic stimulation (TMS) to produce "hallucinating" pixels that display information in your visual field. Yes, you read that right: electrically stimulating your brain cells to produce pixels of light you can see, without the need for shooting lasers into your eyes like most smart glasses do. It's fascinating stuff and, according to Honnet, it's "just a matter of time before we get 'BCI displays' with full HD."

Here's my problem with all this, though. Is this not just the definition of insanity: doing the same thing repeatedly and expecting different results? We've been trained to expect some kind of "holy shit" new gadget that renders other categories obsolete, whether it's a laptop/phone hybrid or a metaverse-esque mixed reality driven by BCI. But the traditional laptop has stood the test of time and weathered these battles, purely because there's not much else out there quite like it.

I believe both experts are right, but they are talking about two steps in the evolution of this category that we will see over the next 30 to 50 years.
Koenig’s vision will be the first step towards this brave new world outlined by Honnet, which is the ultimate “Super Saiyan” form of the portable productivity machine.

Laptop fragmentation

Let's take this "laptop as a truck" metaphor from a different angle. If laptops are soon to become specialist devices for getting things done, then, much like the wide variety of truck types and sizes, can they be tailored to specific use cases? This was a question I explored in a fascinating chat with Neil Thompson, research scientist at MIT's Computer Science and Artificial Intelligence Lab, and it starts with taking a different approach to Moore's law.

For the uninitiated, Moore's law was a prediction made by Gordon Moore in 1965 that the number of transistors on a microchip would double every two years. That has held true for the past five decades, but one thing is clear: the end is in sight. "Since 2005, the rate of improvement has dropped dramatically. That's not because we can't get more transistors on them, but it's because we can't run them faster," Thompson commented. The gains from additional memory will be small once the core source of power in a laptop, the CPU, starts to stagnate.

At this point, Thompson argues, these diminishing returns will be felt most keenly if the laptop remains broadly targeted as a portable productivity machine for all, so it's time to fragment. He then said something that made this prediction click for me: "Most of the tools we have in our lives are quite specialized. We have a hammer and we have a screwdriver. We don't have a mix of the two, because that's not a very efficient machine." This is true of computers today. They are not specialized, but rather a very good compromise. As Thompson put it, "the hammer and screwdriver continually improved so much over time that it wasn't a problem." We haven't had to address this status quo because the pace of Moore's law has kept up with all the use cases. But with the rate of improvement dropping, there will come a time when your hammer/screwdriver combo isn't progressing fast enough and only specialist tools will suffice.

Here's the fascinating thing: this is already happening. Just look at the Pixel 6. "You can already see this happening at Google, who designed their own chip (Tensor) to do machine learning. This is so the company can specialise and do its own thing in this area to benefit the rest of the phone," Thompson noted.

So, what does this mean exactly? Thompson lays out two future scenarios:

1. Laptops armed with one of many specialized chips and a specialized software stack to match. This is great for those who have a single core use case, but the risk is that anything else you want to do either falls off a performance cliff or simply isn't supported.

2. Those specialized chips and stacks live up in the cloud, which turns the laptop of the future into a remote terminal of sorts. The risk factor here is your system becoming useless the moment you lose your internet connection.

Thompson believes the answer falls somewhere in the middle of these two, adding that "we are going to increasingly have people say 'for this particular class of application, we're going to design a specific chip and specialized software that sits on top.'" The other processes that the chip and software stack have not been co-designed for will live in the cloud. Out of all the big blue-sky predictions made for the future of laptops, this one sounds the most grounded and sensible to me. Instead of bending the hammer/screwdriver combo to all use cases and making it a jack of all trades, just make a really good hammer instead.
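To make Thompson's hybrid scenario a little more concrete, here's a minimal sketch in Python of how a future laptop's scheduler might route work: the one workload class the local chip was co-designed for runs on-device, and everything else ships up to the cloud. Everything here (the names, the workload labels, the functions) is hypothetical and invented purely for illustration, not a description of any real system.

```python
# Hypothetical dispatcher for the hybrid model: one specialized local chip,
# with the cloud picking up every workload it wasn't co-designed for.

SPECIALIZED_FOR = {"ml_inference"}  # the class the local chip was built for

def run_on_local_chip(task):
    """Pretend to run a task on the specialized on-device accelerator."""
    return f"[local chip] finished {task['kind']}: {task['payload']}"

def submit_to_cloud(task):
    """Pretend to ship a task to a specialized remote stack."""
    # In a real system this would be an RPC; here we just simulate it.
    return f"[cloud] finished {task['kind']}: {task['payload']}"

def dispatch(task, online=True):
    """Route a task: co-designed workloads stay on-device, the rest go up."""
    if task["kind"] in SPECIALIZED_FOR:
        return run_on_local_chip(task)
    if online:
        return submit_to_cloud(task)
    # The failure mode of scenario two: no local support, no connection.
    raise RuntimeError(f"no local support for {task['kind']} and no connection")

if __name__ == "__main__":
    print(dispatch({"kind": "ml_inference", "payload": "classify photo"}))
    print(dispatch({"kind": "video_encode", "payload": "render timeline"}))
```

The design choice mirrors the hammer analogy: the on-device path stays small and very good at one thing, while breadth is rented from the cloud rather than built into the machine.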

Up to the cloud, to save the planet?

One thing all the experts I spoke to agree on: cloud computing is going to pick up a significant amount of the work your current laptop does. That much is obvious; just as storage has already moved to the cloud, many common CPU workloads could be handled the same way. This reduces the components required, which in turn reduces the weight and size of these machines and the power needed to run them.

What's not so obvious is the overall environmental toll of cloud computing. Greenpeace estimates that by 2025, the technology sector could consume up to 20% of the world's electricity, up from around 7% today, thanks to the aggressive expansion of cloud computing on top of all of our current electronics. Moving consumer laptops to completely cloud-based operation is a good start for reducing our own footprint, but the next step is repurposing all of that distributed computing. "Some companies (such as Qarnot) are using distributed computing as personal heaters and hopefully this will be the norm," Honnet comments. Simply put, rather than relying on the likes of gas central heating in the UK, rooms could be kept toasty by a bank of CPUs busy doing their thing for other users.

It's worth noting that the industry is already making strides towards this environmentally friendly future, with the Acer Aspire Vero sporting a post-consumer recycled (PCR) plastic construction and a design that is easy to open, repair and recycle.
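As a toy illustration of the Qarnot-style idea, here's a minimal Python sketch of a "digital heater": a machine that only pulls compute jobs when the room actually needs warming, so waste heat from other users' workloads stands in for a radiator. The thermostat, job queue and temperatures are all simulated stand-ins, not Qarnot's actual system.

```python
import random
import time

TARGET_TEMP_C = 20.0  # thermostat setpoint for the room

def read_room_temp():
    """Stand-in for a real temperature sensor (returns 18-22 C)."""
    return 18.0 + random.random() * 4.0

def fetch_compute_job():
    """Stand-in for pulling a paid job from a distributed-computing queue."""
    return {"id": random.randint(1000, 9999), "work": "render frame"}

def run_job(job):
    """Running the job is what actually produces the heat."""
    print(f"heating the room by running job {job['id']}: {job['work']}")

def heater_loop(ticks=5):
    """Only burn CPU (and emit heat) when the thermostat asks for it."""
    for _ in range(ticks):
        if read_room_temp() < TARGET_TEMP_C:
            run_job(fetch_compute_job())
        else:
            print("room warm enough; CPU idle, no heat produced")
        time.sleep(0.1)  # shortened for the sketch; a real loop would be slower

if __name__ == "__main__":
    heater_loop()
```

The appeal is that the electricity gets spent twice over, once as useful computation for a remote user and once as heat for whoever hosts the machine.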

Ripping up the UI roots

When you think of the user interface of a laptop, you think of a touchpad and keyboard (maybe a TrackPoint nub if you're a Lenovo power user). It's one of the most critical things to get right when building a portable PC, and one of the biggest obstacles to overcome if, as the future-gazers suggest, the laptop won't be around in three decades' time.

"Indeed, the UI paradigms will have to adapt to miniaturization. It took some time, but smartphone UIs became fairly comfortable, and smartwatches are getting there," Honnet added, indicating that the UI conundrum is probably going to be the trickiest part of establishing this new kind of product. "Human Computer Interaction (HCI) research explored extending the interaction surface by integrating display and sensing in the strap, or even on the skin (video projection, etc). Since Google Glass, many devices have appeared and they will eventually enable improving this UI miniaturization trend."

We could go even further and replace the keyboard and mouse with neuromotor signal sensing interfaces. Honnet admitted that these "take time to learn, but so does keyboard typing, and gamification can make the learning process surprisingly intuitive. They can sense muscle signals that are so weak that we won't even need to move to control our devices."

This would overcome one of the biggest challenges facing the future of computing: nobody wants to talk to their computer at all times. Koenig acknowledged as much, citing specific use cases like working on confidential documents in public spaces such as cafes. It seems like only yesterday that everyone thought the future of tech interaction was speech, but that fatal flaw stopped voice from ever being more than a way to talk to your phone or your smart home. Maybe interpreting your muscle and brain signals is the way forward.
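To give a flavor of how a neuromotor interface might turn faint muscle signals into keystrokes, here's a minimal, self-contained Python sketch: it rectifies a simulated muscle-signal trace, smooths it into an envelope, and fires a key event whenever the envelope crosses a threshold. The signal values, window size and threshold are all invented for illustration; real systems are far more sophisticated than this.

```python
# Toy pipeline for an EMG-style input: rectify a raw muscle signal,
# smooth it into an envelope, and emit a "key press" on each threshold
# crossing. All signal values below are simulated.

WINDOW = 4        # samples in the moving-average envelope
THRESHOLD = 0.5   # envelope level that counts as an intentional twitch

# A fake raw signal: mostly noise, with two brief bursts of muscle activity.
raw_signal = [0.02, -0.03, 0.05, 0.9, -0.8, 0.85, 0.04, -0.02,
              0.03, -0.7, 0.75, -0.8, 0.05, 0.01]

def envelope(samples, window=WINDOW):
    """Full-wave rectify, then moving-average to get the signal envelope."""
    rectified = [abs(s) for s in samples]
    env = []
    for i in range(len(rectified)):
        chunk = rectified[max(0, i - window + 1): i + 1]
        env.append(sum(chunk) / len(chunk))
    return env

def detect_presses(env, threshold=THRESHOLD):
    """Emit one key event per threshold crossing (rising edge only)."""
    events, active = [], False
    for i, level in enumerate(env):
        if level >= threshold and not active:
            events.append(f"key press at sample {i}")
            active = True
        elif level < threshold:
            active = False
    return events

if __name__ == "__main__":
    for event in detect_presses(envelope(raw_signal)):
        print(event)
```

Run on the fake trace above, this reports two key presses, one per burst, which is the basic trick a gamified training loop would then refine into full typing.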

Bottom Line

As you can see, the future of laptops is no longer just a technological question; it's a cultural one, too. This category has defied predictions of an early death at the hands of tablets, and while the rate of innovation has slowed slightly compared to the last 30 years, the next three decades look incredibly bright for the portable computer. What place will laptops have in our lives? Will we see multiple specialist variations for different use cases, while a more consumer-facing hardware category takes over for the rest of us? Will work even be done on a QWERTY keyboard, or will our minds take over directly? Everyone agrees that change is coming, but nobody can be sure what that change will be.