The writing on the wall
Background
I've been writing code for almost 20 years now. Professionally for about 10. As a kid, I was fascinated with computers; when I got my first computer, I would sit glued to the CRT monitor for hours on end (20/20 eyesight, by the way). At first it was all about games, but after a few years I found out you can MAKE games, with something called programming. I was sold.
Back then, in my country, the internet was not yet a widespread thing, so the only available resources were very old books on C, C++ and Pascal; I think I also got my hands on a FoxPro book at one point. I didn't do much with that at the time, just stupid little command line programs without much fanfare. But eventually my family got an internet connection, and that's when I discovered the true wonders of programming: putting graphics on the screen. That was a real game changer. SDL, raycasting, raytracing, WinForms, Game Maker: all things I experimented with as a teenager to make the computer do my bidding. Back then I had no idea what a software developer was, but I was sure that was what I wanted to be.
When the time came to pick a university, I did not know much about the state of the local IT industry, what jobs were available or how the pay compared to other domains. But I knew that if I wanted to get in, I needed to attend a technical university, preferably the best one, so it had to be one of the faculties of the Polytechnic University of Bucharest.
That's where, after a few years, I had my first encounters with the concepts of machine learning and neural networks. Really fascinating stuff. The domain was so appealing that both my bachelor's and master's theses were related to machine learning approaches for image manipulation. Following this, I also entered the PhD program. Unfortunately, due to various factors, such as the COVID pandemic, focus on my day job and career, and a huge feeling of inadequacy about the compute power needed to get interesting results in machine learning and AI (this was around the time when GANs and generative models were all the rage, and the famous "Attention Is All You Need" paper was fresh off the printing press), I made the decision to stop pursuing the PhD. In hindsight, that might not have been the best decision, seeing where things are headed now.
During the final years of my bachelor studies, I landed my first real software development job. It was an outsourcing job; most software development jobs in my country are outsourcing jobs. It was a great learning experience, where I picked up most of what I know now. I even had the opportunity to develop an automated chatbot for a large company, back when LLMs were not a thing and the available NLP solutions seemed like banging rocks together by comparison. When I started the doctoral program, I made an attempt to switch tracks from software development to a machine learning position. I even signed the contract, but my employer at the time made me an offer I could not refuse.
I did eventually make a sideways move, when I received the opportunity to start working in game development. My dream job, finally available to me. I would no longer be a technical lead on various projects or conduct meetings and business analysis sessions with clients, things which I loved and was good at; instead, I would be working on games. Finally I would be creating something that would reach a wide audience, not just internal tools for various businesses. This move also allowed me to step out of my comfort zone: gone were the days of writing back-ends in .NET and front-ends in Angular. Now I was writing gameplay code in C++, building back-end infrastructure for an always-online action RPG in .NET, and managing and deploying servers on AWS.
The purpose of this section was to highlight that I had two opportunities to be, at least tangentially, on the other side of this thing. Because from where I'm standing, it does seem like machine learning engineers will have some semblance of job security for a while longer compared to other kinds of software engineers.
Enter ChatGPT
Pretty early in my game dev career, OpenAI took the world by surprise by releasing ChatGPT to the public, a newfangled thing called a Large Language Model. While there was also a lot of hype and pomp around the various generative models that could produce images from text (diffusion models), at the time those felt more like a curiosity. An interesting toy. It was clearly not a real threat to capable artists. Large Language Models did something different: they took in text and spat out text. Of course, ChatGPT was not the first LLM; people were already toying with its older brothers GPT-1 and GPT-2. But ChatGPT was different. It wasn't just a text generator, it answered queries. Sure, many answers were hallucinated, but it was a completely new way to find instructions and information, especially as Google search quality was sadly already on a downward spiral.
You could say I was both an early adopter and a skeptic. I bought a ChatGPT subscription as soon as it became available in my country. It was a fun little toy, and an interesting environment in which to riff on ideas. But I was also skeptical. I did understand that should this thing become more capable, white-collar work would be in grave danger, but I falsely assumed that the rapid incremental changes were just because we were still in the nice part of the sigmoid curve, and that we would get into diminishing-returns territory pretty quickly (which might still be sort of true, but it seems corporations and governments are prepared to throw all possible resources at this problem). I also adopted the "it's just a stochastic parrot" mantra pretty early, a thing which does not seem so self-evident any longer (no, I don't have AI psychosis, I promise).
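For the curious, the curve I had in mind is the standard logistic function. A minimal sketch of the intuition, with made-up symbols, not a model of anything measured:

```latex
% Capability f(t) as a function of invested resources t (illustrative only).
% L  = the ceiling, the best the technology can ever get
% k  = how steep the ramp is
% t0 = the inflection point, where growth stops accelerating
f(t) = \frac{L}{1 + e^{-k(t - t_0)}}
```

Below the inflection point t0, every extra unit of resources looks like exponential progress; past it, the same unit buys less and less. My mistaken bet was that we were already close to t0.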
I have enjoyed the fruits of advancements in LLM capabilities. I've even started toying around with local LLMs, emboldened by the Sputnik moment that was DeepSeek. I was sort of paying attention to the industry, and saw how many startups were building IDEs with AI integrations at multi-million and multi-billion dollar valuations, and how this newfangled thing called vibe coding was slowly starting to enter the zeitgeist.
So a few months ago I tried Cursor. I was not impressed. It seemed really bad at writing and understanding C++ code: it ate closing brackets, hallucinated functions, etc. It was not ready. So I limited my AI usage to writing very small, piecemeal functions, devising or adjusting the occasional algorithm, and using it as a sort of documentation oracle, but not much beyond that. I also mostly switched from ChatGPT to Gemini; it was free and seemed more capable at focused coding tasks, coupled with a somewhat more robust context window. I saw many people singing Claude's praises, but the free version that I tried did not impress me at the time.
The Present
When I returned from the winter holidays, I found my brother in the middle of trying out various VS Code extensions for coding agents. I was skeptical: "Yeah, I don't know about that, you know I did some experiments a while ago and it was bad, you don't want that touching any production code." And he told me: no, this time it's different, there have been many advances, and we need to start practicing "vibe coding" ASAP if we want to keep up.
So I got a Copilot subscription, as it offered great bang for the buck: pay Microsoft $10 a month and you get access to Google's, OpenAI's and Anthropic's models. Very convenient. So I tried it out, using Opus for planning and Sonnet for the actual implementation, and holy shit, it's really good!
Is it perfect? No, of course not. It still gets confused about Unreal's smart pointers sometimes, and at times chooses very lazy solutions for problems, but with proper supervision and careful planning it's a huge force multiplier. I am not exaggerating when I say it's an easy 10x to 100x increase in productivity, depending on the context.
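To make the smart pointer confusion concrete, here's a hypothetical sketch of the classic mistake (invented names, and the usual UnrealHeaderTool boilerplate for a header inside an Unreal module is assumed). Unreal has two ownership worlds: UObject-derived types are garbage collected and must be referenced via UPROPERTY() members or TWeakObjectPtr, while TSharedPtr and friends are only for plain C++ types. Agents love to mix the two.

```cpp
// Hypothetical illustration of the classic Unreal ownership mix-up;
// type and member names are invented for the example.
#include "CoreMinimal.h"
#include "UObject/Object.h"
#include "MyComponent.generated.h"

// Plain C++ struct: this is TSharedPtr territory.
struct FDamageInfo
{
    float Amount = 0.0f;
};

// Wrong: a TSharedPtr must never own a UObject. The garbage collector
// cannot see the shared reference and may destroy the object anyway,
// leaving a dangling pointer:
//   TSharedPtr<UObject> BadTarget = ...; // the lazy agent special

UCLASS()
class UMyComponent : public UObject
{
    GENERATED_BODY()

    // Right: a UPROPERTY() reference is visible to the GC and keeps
    // the target alive.
    UPROPERTY()
    UObject* StrongTarget = nullptr;

    // Also right: a weak pointer does not extend the target's lifetime,
    // but can be safely checked before use.
    TWeakObjectPtr<UObject> WeakTarget;

public:
    void Update()
    {
        if (UObject* Target = WeakTarget.Get()) // null if already collected
        {
            // safe to use Target here
        }
    }

    // Plain data is fine to share the standard Unreal C++ way.
    TSharedPtr<FDamageInfo> LastHit = MakeShared<FDamageInfo>();
};
```

In my experience, the moment the agent reaches for TSharedPtr on an actor or component is the moment to step in.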
Is it ready to replace programmers? Not quite, not yet, not at all. But I can see the writing on the wall. I used it to write a customer support / administration platform for our game's backend servers, and I had all the desired functionality ready in about two days, plus another two days of debugging and ironing out the kinks. This was something we had been putting off for a while, because it would have taken a competent programmer between two weeks and a month to get all that functionality done. It's not looking good for people just now entering the industry; unfortunately, the incentives of our capitalist society do not leave a lot of space for inefficiency. The software development job market was already looking pretty rough post-COVID and has suffered many losses due to the AI hype, but it seems it is no longer just hype. I foresee that job openings for entry-level positions will continue to dwindle.
The game development industry will probably still be fine for a while, as game code needs to be fast and good, not just working, but that will probably not be a problem for much longer. It's not clear how aware the rest of the software development industry is of the state of these tools, but it does seem like the outsourcing industry is going to suffer a pretty big contraction or stagnation. What client is going to pay an outsourcing company to develop a product with a team of five people over a few months when AI can spit out a working prototype in a few days? Sure, there are security concerns, and there will be bugs to fix, but a senior developer plus AI can now do the work of a whole team. It is not looking good, so from that point of view, maybe it was a lucky decision to switch tracks. I'll have to contact some of my colleagues from the old job to get a sense of how things are looking over there. But it does seem grim: AI does not tire, it does not sleep, it does not take breaks, it does not pause to doomscroll.
I'm no authority, nor an expert, but I'll go ahead and say it: AGI is already here. It's artificial, and it's generally intelligent. It excels at the kinds of tasks for which you used to need highly skilled and educated humans. It's probably not smarter than all of them; however, as controversial as this might seem, I do believe it has surpassed the capabilities of the average human. There are probably edge cases, but it walks, talks and quacks like AGI to me.
The Future
So where does this leave us? It seems coding agents are here to stay, and it's very likely they will continue to improve to the point where very little human involvement will be needed. Anthropic CEO Dario Amodei was already touting that 70-90% of the code written at the company is done by Claude. Don't get me wrong, I'm not writing this to spread doom and gloom, and I'm certainly not an AI optimist. I was very pessimistic about AI capabilities, but if I can see the error of my ways, it's clear many people will soon wake up to this revelation.
It seems obvious we need to adapt, and start working on improving our prompt-fu. The only probable alternative would be to act like the Luddites before us, start smashing down the mills (data centers, these days), and begin the Butlerian Jihad. I joke, of course; I do not condone violence. But it is clear that keeping our heads in the sand will not do us much good.
The problem is that it seems to me all the major players in this game are stuck in an ugly prisoner's dilemma. If everyone cooperated and refrained from bringing artificial superintelligence into existence, the status quo would be preserved and all would be good. But everyone is also aware that whoever defects and brings online the first superintelligent AGI wins everything. I'm not sure whether LLMs can give us superintelligent AGI, but if it is possible, it is a very winner-takes-all kind of scenario. I hope it's not a hot take when I say that superintelligent AGIs are the new nukes, and there is an arms race going on right now; at least, that's how the major players are acting. They might be wrong, but they are making decisions that affect the whole world as if they are not. Of course, the only winning move is not to play, but they'll probably not reach that conclusion until each country has its own very capable AGI agents.
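For the game-theory-inclined, here is the shape of that dilemma as a toy sketch. The payoff numbers are ones I made up purely for illustration; only their ordering matters:

```cpp
// A toy prisoner's dilemma over "race to superintelligence or hold back".
// The payoffs are invented; what matters is that defecting is the best
// response to EITHER move, so rational players end up at mutual defection
// even though mutual cooperation would leave everyone better off.
#include <array>
#include <cstdio>

enum Move { Cooperate = 0, Defect = 1 }; // Cooperate = hold back, Defect = race

// Payoff[my_move][their_move] = my payoff
constexpr std::array<std::array<int, 2>, 2> Payoff = {{
    {{ 3, -10 }}, // I hold back: fine if they do too, catastrophic if they race
    {{ 10, -5 }}, // I race: I win everything if they hold back; mutual race is costly
}};

int main()
{
    for (Move theirs : { Cooperate, Defect })
    {
        Move best = Payoff[Defect][theirs] > Payoff[Cooperate][theirs]
                        ? Defect : Cooperate;
        std::printf("If they %s, my best response is to %s\n",
                    theirs == Cooperate ? "hold back" : "race",
                    best == Cooperate ? "hold back" : "race");
    }
    // Defecting dominates in both cases. Hence the arms race.
}
```

Mutual cooperation (3, 3) beats mutual defection (-5, -5), but since racing is the better response to either move, racing is where everyone lands.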
Possible Scenarios
Positive-ish scenarios
Either by some physical limitation or by human will, superintelligent AI is never developed, and the status quo is preserved.
Diminishing returns save the day
The hopeful and optimistic scenario is that we are indeed approaching the flat part of the sigmoid curve. Physical and resource limitations in hardware and energy will prove an insurmountable wall, LLM performance will plateau, and this will be as good as it ever gets. VC money dries up, and companies need to start charging the real cost of running these models, probably making them comparable in cost to a human.
While certainly a nightmare scenario for VCs, tech CEOs and would-be techno-feudalists, and probably the harbinger of a terrible economic downturn once the bubble pops, at the end of the day white-collar jobs will be saved, all workers will just be much more productive, and hopefully they will see a bit of the fruits of their AI-assisted labor.
How likely is it? I have no idea, but it is plausible; then again, it could also just be wishful thinking.
Regulation nips the problem in the bud
Due to great societal unrest as more and more people become disenfranchised, governments step in and, by some miracle, countries all decide to cooperate in order to prevent great societal harm, unrest, and the collapse of capitalism as we know it. They come up with regulations and limit capabilities at the hardware level so that no private entity can build a dangerous AI. The status quo is preserved. Each state will probably have a top-secret, military-grade super AGI in its basement for tech development and research purposes, but life for everyday people remains mostly the same. Until one of them breaches containment, at least.
How likely is it? Sadly, I don't have much faith in our world leaders, and since the prisoner's dilemma predicts that all involved parties will defect, it does not seem like a likely scenario.
Post Scarcity Communist Utopia
The singularity is achieved. A benevolent and omnipotent AI takes over and solves all of humanity's woes. We enter a post-scarcity society where no human needs to work, and all our needs are taken care of by AIs and autonomous robots. Humans are free to pursue poetry (more likely, consume AI slop) and explore the stars.
How likely is it? …And then I woke up
Doom and gloom
One or more of the big players achieve superintelligent AGI, but it's not a benevolent singularity. In this scenario, no matter which way things go, I think it will spell the end of capitalism as we know it. A massive number of highly educated, well-paid people will become obsolete, and they will drag the whole system down with them.
I'm no economist, so maybe I'm overstating the importance of a healthy middle class. I have a friend with whom I share some of my concerns regarding my future obsolescence, and he always replies with something akin to: "Just become an electrician bro, get into a trade, those will always be in demand." But this does not seem to consider all the implications of the situation. If all white-collar workers start going into trades, the supply and demand equation for tradespeople will be absolutely scuffed. Who's gonna pay for plumbers, HVAC technicians and electricians when only 1% of the people can afford these services, while the other 99% are competing for those contracts? Without a healthy middle class, who's gonna be paying for luxury gadgets, food delivery, cleaning services, gourmet food, hospitality, tourism, new construction, pool cleaning, modern cars, car repairs, and so on?
Our current capitalist system depends on the consumption of goods and services as its driving force, so I do not see any way in which it can survive the disenfranchisement of everyone who works in services such as software, law, finance, advertising, marketing, perhaps even medicine.
It does seem to me like no matter what, modern capitalism will end, and we will enter a hellish techno-feudalist dystopia, with a great divide between the owners of the means of AI and robotics and everyone else. Perhaps there will be room for a small class of productive and essential people needed to support the lifestyles and needs of this new elite class of techno-lords.
Here are some of the possible scenarios I can envision (of course, they all depend on the precondition that the sought-after super AGI is even attainable):
A boring dystopia
The owners of these AI systems will also see the writing on the wall. They will do the math and figure out that even with advanced AI, and a small army of mercenaries guarding each and every one of them, 99% is still larger than 1%. So in order to avoid great unrest, massacres, and anything else that might endanger their new toy, they will do whatever is possible to appease the rabble.
They will implement some kind of UBI (Universal Basic Income), ensure the people are provided with ample bread and circuses, and start making plans for long-term sustainability. Hell, this might even pave the way to the utopian post-scarcity society; however, I believe the recent revelations from the Epstein files paint a picture in which the elite billionaire class might not have our best interests at heart.
They will likely enact mass surveillance powered by AI, to nip any sort of rebellion in the bud. They might also start imposing reproductive limitations on those they deem unproductive, and slowly, over generations, the population of humans that do not fit this new economic system will simply die off and shrink, until a stable, manageable equilibrium is reached.
How likely is it? Maybe I'm too much of a pessimist, but it does seem to be a likely scenario, even though I really hope it won't come to this.
No mitigation
They decide they do not care about the societal upheaval and the starving masses. They retreat into their self-sufficient bunkers while people starve and fight governments that are overwhelmed and trying to do what little they can to contain the unrest.
How likely is it? I think the powers that be can easily dismiss this option as suboptimal. Even though they have bunkers and supplies, who's gonna protect the power grid and data centers from the angry mobs? Not to mention their bunkers mean jack shit to a guy with a cement truck.
Active extermination
They don't want to, or can't be bothered to, do the "right" thing. Why bother with UBI when the end result will be more or less the same: a decline in the total human population? And they can't just hunker down in their bunkers and let it sort itself out, because of the risks mentioned in the previous section.
Don't get me wrong, I don't think all the ruling-class elites are monstrous psychopaths, but there are plenty who are, and many more who turn a blind eye to such disregard for humanity, as shown by the published Epstein documents.
While billionaires conspiring to reduce the human population sounds like a crackpot conspiracy theory, it is something some of them talk about in the open. So it's not unimaginable that they might decide to rush the desired effect a bit with some automated drones, an engineered virus, or who knows what other means.
How likely is it? While not impossible, it would require a lot of work and cooperation from third parties in a way that makes it simply unfeasible on a global scale. So even though some of them might want it, it's not likely to happen. After all, they will need some people to fight in the thunderdomes.
An unanswered question
So, assume that this super AGI business really is a winner-takes-all situation, and that all the involved players act as if this is true; they are all spending money and firing people as if the singularity is right around the corner. In this situation, what are all the billionaires and millionaires who are not players in the AI field doing? There are so many corporations with plenty of very rich shareholders, such as Coca-Cola, Walmart, Netflix, Disney, Amazon, Berkshire Hathaway, oil companies, car manufacturers, and home appliance manufacturers: companies that make a lot of money and depend on the continued existence of capitalism, yet have no will or means to create their own state-of-the-art AI models.
Wouldn't it be in their best interest to lobby governments to regulate AI, or to employ other tactics to stall the major AI developers, to ensure that capitalism as we know it does not end? What exactly is their game plan? Are they unconcerned? Are they unaware? Do they think their vast wealth will mean anything in a post-capitalist society where the only thing that matters is who owns the AI and who doesn't?
Perhaps the fact that they are not too worried is a good sign, and we can sleep comfortably. Or perhaps they just don't see the writing on the wall.