Can you design an AI agent that I own, to replace me? This is what the market really wants and is probably one of the ONLY things that doesn't exist.
Just let me subscribe to an agent to do my work while I keep getting a paycheck.
Who's giving you that paycheck? Why don't they just hire that AI agent themselves and cut out the middle man?
In this scenario the person who wants to be paid owns the output of the agent. So it’s closer to a contractor and subcontractor arrangement than employment.
How do they own it? I see two scenarios.
1. They built the agent and it's somehow competitive. If so, they shouldn't just replace their own job with it; they should replace a lot more jobs and get a lot richer than one salary.
2. They rent the agent. If so, why would the renting company not rent directly to their boss, maybe even at a business premium?
I see no scenario where there's an "agent to do my work while I keep getting a paycheck."
If you know contracting, you know that’s exactly how it’s always worked.
It's the equivalent of outsourcing your job. People have done this before, to China, to India, etc. There are stories about the people who got caught, e.g. with China because of security concerns, and with India because they got greedy, were overemployed, and failed at opsec.
This is no different, it's just a different mechanism of outsourcing your job.
And yes, if you can find a way to get AI to do 90% of your job for you, you should totally get 4 more jobs and 5x your earnings for a 50% reduction in hours spent working.
Maybe a few people managed to outsource their own job and sit in the middle for a bit. But that's not the common story, the common story is that your employer cut out the middle man and outsourced all the jobs. The same thing will happen here.
A question is which side agents will achieve human-level skill at first. It wouldn’t surprise me if doing the work itself end-to-end (to a market-ready standard) remains in the uncanny valley for quite some time, while “fuzzier” roles like management can be more readily replaced.
It’s like how we all once thought blue collar work would be first, but it turned out that knowledge work is much easier. Right now everyone imagines managers replacing their employees with AI, but we might have the order reversed.
> This begs the question of which side agents will achieve human-level skill at first.
I don't agree; it's perfectly possible, given chasing0entropy's... let's say 'feature request', that either side might gain that skill level first.
> It wouldn’t surprise me if doing the work itself end-to-end (to a market-ready standard) remains in the uncanny valley for quite some time, while “fuzzier” roles like management can be more readily replaced.
Agreed - and for many of us, that's exactly what seems to be happening. My agent is vaguely closer to the role that a good manager has played for me in the past than it is to the role I myself have played - it keeps better TODO lists than I can, that's for sure. :-)
> It’s like how we all once thought blue collar work would be first, but it turned out that knowledge work is much easier. Right now everyone imagines managers replacing their employees with AI, but we might have the order reversed.
Perfectly stated IMO.
How are businesses going to get money if there are no humans that are able to pay for goods?
Lots of us are not cut out for blue collar work.
Some humans will be rich and they'll buy things. For example, the humans who own AI or fabs. And the humans who serve them (assuming there will be services not replaced by AI, for example prostitution) will also buy things.
If 99.99% of other humans become poor and eventually die, it will certainly change the economy a lot.
That’s assuming a large chunk of humanity will just lay down and die off.
> How are businesses going to get money if there are no humans that are able to pay for goods?
By transacting with other businesses. In theory comparative advantage will always ensure that some degree of trade takes place between completely automated enterprises and comparatively inefficient human labor; in practice the utility an AI could derive from these transactions might not be worth it for either party—the AI because the utility is so minimal, and the humans because the transactions cannot sustain their needs. This gets even more fraught if we assume an AGI takes control before cheaply available space flight, because at a certain point having insufficiently productive humans living on any area of sea or land becomes less efficient than replacing the humans with automatons (particularly when you account for the risk of their behaving in unexpected ways).
There is a number of people who own, well, in the past we could say "means of production" but let's not. So, they own the physical capital and AI worker-robots, and this combination produces various goods for human use. So they (the people who own that stuff) trade those goods between each other, since nobody owns the full range of production chains.
The people who used to be hired workers? Eh, they still own their ability to work (which is now completely useless in the market economy) and not much more, so... well, they can go sleep under the bridge or go extinct or do whatever else peacefully, as long as they don't trespass on private property, the sanctity and inviolability of which is obviously crucial for societal harmony.
So yeah, the global population would probably shrink down to something in the hundreds of millions in the end, and ironically, the economy may very well end up self-sustaining and environmentally green and all that nice stuff, since it won't have to support the living standards of ~10 billion, although the process of getting there could be quite tumultuous.
This is disgusting to read, not going to lie. Hopefully the workers just lynch the people who enriched themselves on other people's work.
As long as someone else is still paying their employees, it’s all good.
Can you explain why we pay Sam Altman & Elon Musk? Or Jeff Bezos & Bill Gates? They’re just middlemen collecting money for other people’s labor.
You are welcome to try to cut them out and start your own business. But I suspect you might find it a bit harder than your employer signing up for a SaaS AI agent. Actually wait, isn't that what this website is? Does it work?
They are a bridge between those with money and those with skill. Plus they can aggregate information and act as a repository of knowledge and decision maker for their teams.
These are valuable skills, though perhaps nowhere near as valuable as they end up being in a free market.
A mistake lies in thinking it's a market, but it's egregious that you'd call it free.
This is backwards. Those people got into the positions they have by having money to spend, not because someone wanted to pay them to do something. (Or they had a way to have control over spending someone else's money.)
Do people on Hacker News actually believe this? Each one of the four people named built a product I happily pay for! Then they used investment and profits to hire people to build more products and better products.
There's a lot of scammers in the world, but OpenAI, Tesla, Amazon, and Microsoft have mostly made my life better. It's not about having money; look at all the startups that have raised billions and gone kaput, versus, say, Amazon, which raised just $9M before its $54M IPO and is still around today, bringing tons of stuff to my door.
Isn't this kind of the same as an AI copilot, just with higher autonomy?
I think the limiting factor is that the AI still isn't good enough to be fully autonomous, so it needs your input. That's why it's still in copilot form.
What would you actually do if you got that? I like watching movies and playing games, but that lifestyle quickly leads to depression. I like travelling too, but imagine if everyone could do it all the time. There's only so many good places.
Not unless you can afford your own supercluster. Otherwise, the AI you use will own you.
Why would the market want that? Don't be stupid.
The world doesn't want assholes either but here we are
That's the premise behind Workshop Labs! https://workshoplabs.ai
Our CEO did not write the customary Thanksgiving email. There was nothing from the other C-level leadership either. I’ve been around long enough to notice this erosion of company-culture customs. What is happening? Perhaps an AI CEO would keep up these subtleties.
Really this is the only 10x part of GenAI that I see: increasing the number of reports exponentially by removing managers/directors, and using GenAI (search/summarization, e.g. "how is X progressing" etc) to understand what's going on underneath you. Get rid of the political game of telephone and get leaders closer to the ground floor (and the real problems/blockers).
Also replaces lawyers.
From what I hear, this will not happen. AI keeps absolutely making up laws and cases that don’t exist no matter what you feed it. Basically anything legal written or partially written by AI is a liability. IANAL but have been reading a tiny bit about it.
I love that they’re all called David except for Simon
This looks like the perfect counterpart to Boss as a Service:
https://bossasaservice.com/
How hard would it be to run a simulator with multiple LLMs. Say, one as the boss and a few as employees. Just let them talk, coordinate, and "work"? Could be the fastest way to test what actually happens when you try to automate management.
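Probably not very hard to prototype. A minimal sketch of such a sandbox, with stub functions standing in for the real LLM calls (every name here is made up; swap the stubs for actual API calls to run a real experiment):

```python
import random

# Stand-ins for real LLM calls; replace these with actual model API calls.
def boss_llm(inbox):
    # The "boss" sees all status updates so far and issues the next directive.
    return f"Directive: prioritize task #{len(inbox) + 1}"

def employee_llm(name, directive):
    # Each "employee" reacts to the latest directive with a status update.
    status = random.choice(["done", "blocked", "in progress"])
    return f"{name}: {directive!r} -> {status}"

def run_simulation(employees, rounds=3):
    transcript = []  # full conversation log, (speaker, message) pairs
    inbox = []       # messages the boss has seen so far
    for _ in range(rounds):
        directive = boss_llm(inbox)
        transcript.append(("boss", directive))
        for name in employees:
            reply = employee_llm(name, directive)
            transcript.append((name, reply))
            inbox.append(reply)
    return transcript

log = run_simulation(["alice", "bob", "carol"], rounds=2)
for speaker, msg in log:
    print(f"[{speaker}] {msg}")
```

With real models plugged in, the interesting output wouldn't be the transcript itself but where coordination breaks down: dropped context, contradictory directives, employees politely agreeing to blocked work.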
This is quite literally what we've built @ Gobii, but it's prod ready and scalable.
The idea is you spin up a team of agents, they're always on, they can talk to one another, and you and your team can interact with them via email, SMS, Slack, Discord, etc.
Disclaimer: founder
Can I get this in an ant-farm mode where I can see them dawdle around a cube-farm office?
Interesting approach, but I mean more in the sense of a multi-agent sandbox than workflow automation. Your project feels like wrapping a bunch of LLMs into "agents" with fixed cadences; it's a neat product idea, even if it mostly ends up orchestrating API calls and cron jobs.
The thing I’m curious about is the emergent behavior: letting multiple LLMs interact freely in a simulated organization to see how coordination, bottlenecks, and miscommunication naturally arise.
Cool project regardless!
And they simulate an outsourced team where the enterprise that pays for the team doesn't know that it's just AI, and assumes the Chinese/Indian/African people on this external team are just really bad at what they do.
Multiple projects for autonomous multi agent teams already exist.
Left to their own devices, the LLMs would probably design a pocket watch.
Not exceedingly so: https://news.ysimulator.run/faq
I dunno the comments here perfectly capture HN-ackshewelly!
https://news.ysimulator.run/item/4317
I like the fun part of it. But this is clearly vibe-coded slop: the awful pink colour scheme, clickable buttons which don’t do anything bang in the middle of the page, the share button which doesn’t really share, etc.
And some of the messages keep repeating, like the carbon-footprint one. Just seems low effort, and not in a fun way.
Counterpoints: this joke isn't worth the effort it would take to make it high quality, and the jank is part of the joke. AI slop is garbage; presenting it as otherwise would be missing the point.
Joke aside, I do think someone should work on a legitimate agent for financial and business decisions, management, and so on.
Especially "decision making". I find that's one of the tricky parts: getting the AI agent to optimize for actually good decisions, not just give you info or options, but form real opinions and make real decisions.
What kind of financial and business decisions? And what will be the metric for “good decision”?
Unfortunately LLMs aren't good at making decisions.
My boss is a pretty awesome technologist, too, but has a lot of time sunk into business stuff.
I sent this along as a joke but I doubt any of us are enthused about working for an AI.
It would be cool to automate more of that business stuff but I suspect it's too "soft" to actually automate.
The UI looks good! Is there a reason this is being shared here? Feels like a collection of tired, trite oneliners that I’d expect to see on Twitter rather than here.
Thank you brand new account, your contributions so far have clearly been more valuable!
> We don't have meetings, we have collaborative ideation experiences
yep, checks out.
Funny. In fact, blockchain smart contracts (DApps) tried this before, by fully automating (they call it democratizing) the decisions. Not sure how it went.
Is that you, Delamain?
Shut up and take my money.
in the same vein as http://developerexcuses.com/ (and presumably many others)
https://news.ycombinator.com/item?id=20059894
Called it, six years ago :-)
I can see boards of directors drooling at the potential savings.
Tesla can immediately make a saving of $1 Trillion
Love this one.
Unfortunately, that $1T is because Elon's buddies are on the board. They're a bunch of rich human centipedes.
Musk isn’t getting a trillion. Tesla sales would have to skyrocket.
The package doesn't say who the buyers must be. Musk could just have his other pet companies buy Teslas to meet the threshold.
Imagine that sales do skyrocket, but a RoboCEO is in charge and the trillion gets distributed to shareholders instead.
Imagine that at least half the shares were held by a sovereign wealth fund that paid dividends to every citizen.
Aw, it's just a joke. I thought someone was ready to really try it.
Eventually, there will be AI CEOs, once they start outperforming humans. Capitalism requires it.
Capitalism requires that capital is owned and controlled by specific people. So, no, there cannot be an AI CEO. In other words, if you say you have an AI CEO, then that entity will be under the control of someone else, whom you might as well call the real CEO.
Just like how Twitter had a “CEO” who was some pliable female who did the bidding of the real CEO: Elon Musk.
There are shareholders/owners and there are CEOs. You can certainly have an AI CEO if the board of directors wants that, although depending on the jurisdiction CEOs might need to be human; surely not everywhere.
And you could even imagine AI owners with something like Bitcoin wallets. So far it wouldn't work because of prompt injections but the future could be wild.
> Capitalism requires that capital is owned and controlled by specific people.
That is an overly simplistic description. One can imagine a board of directors voting on which AI-CEO-as-a-service vendor to use for the next year. The 'capital' of the company is owned by the company, and the company is owned by the shareholders. This is not incompatible with capitalism in principle, but it wouldn't surprise me if it were incompatible with some forms of incorporation.
The way AI (and capitalism really) makes CEOs obsolete is by replacing all companies with just one. So only one CEO needed eventually.
Though I think the CEO role is realistically one of the hardest to automate, I’d say middle management is a very juicy target.
To the extent a manager is just organizing and coordinating rather than setting strategic direction, I think that role is well within current capabilities. It’s much easier to automate this than the work itself, assuming you have a high bar for quality.
You can make this yourself quite easily.
1. Choose a UI that lets you modify the system prompt, like Open WebUI.
2. Ask Claude to generate a system card for a CEO.
3. Copy and paste the output into a system prompt.
There you have it: your own AI CEO.
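Or skip the UI entirely. A minimal sketch of the same idea against any OpenAI-compatible chat API (the system-card text and the model name in the comment are placeholders, not real artifacts; generate a proper system card as described above):

```python
# Placeholder system card; generate a real one as described above.
CEO_SYSTEM_CARD = (
    "You are the CEO of a small software company. "
    "Set direction, make decisions, and answer tersely."
)

def ceo_messages(history, user_msg):
    """Prepend the CEO system card to the running conversation."""
    return (
        [{"role": "system", "content": CEO_SYSTEM_CARD}]
        + history
        + [{"role": "user", "content": user_msg}]
    )

msgs = ceo_messages([], "Should we pivot to AI agents?")
# Send `msgs` to any OpenAI-compatible endpoint, e.g.:
#   client.chat.completions.create(model="gpt-4o-mini", messages=msgs)
print(msgs[0]["role"], "->", msgs[-1]["content"])
```

Append each reply to `history` and you have a persistent AI CEO you can consult from a cron job.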
Now do Dropbox next
Great name.
Can we also replace shareholders with AI?
I don't get why people get a boner over CEOs. They are mostly irrelevant; the real power lies further up.
One mention of 3D-printed chicken spins up a new AI CEO, several AI damage-control agents, an AI apology, new AI product ads; repeat as needed.
They're at the center of the hourglass that exists between external (board members, shareholders, customers, partners) and internal (employees) interests.
Why waste GPU cycles when a simple bash script would do?
Looks like that's a response to Linus and the Linux community saying that Qualcomm chips weren't able to run Linux. Hey, it's good though; at least now there's internal support.