
(SeaPRwire) – There’s a great deal of hype surrounding AI agents. For a lawyer at a major law firm, the prevailing narrative suggests they will soon have a team of AI agents to handle various tasks on behalf of their most important clients. The same holds true for a Big Four accountant performing an annual audit for a leading Fortune 500 company. Some of this work is already underway at the most forward-thinking firms; the rest are likely to follow in the near future.
OpenAI’s recent acquisition of OpenClaw, an open-source, autonomous AI agent built to run locally on a user’s computer, is a clear sign that AI agents are quickly being given expanded responsibilities and greater access—from email inboxes to bank accounts—an expansion that carries unintended consequences, such as deleted inboxes and Amazon Web Services outages. Peter Steinberger, founder of OpenClaw, has said that he wants to “build an agent that even my mum can use.” But there is a key difference between using technology to improve efficiency and handing over the agency that humans should rightfully hold.
These developments raise difficult questions, particularly for young people seeking agency in their personal and professional lives. Does it make sense to train as an actuary if AI is purportedly skilled at predicting unknown outcomes using data? Is it worth the current cost to train as a lawyer, accountant, or even pursue higher education at all when all the answers seem to be at our fingertips? Put another way, what does agency look like in an era defined by the widespread proliferation of AI?
Silicon Valley is promising a technological revolution that will fundamentally shift how we work, live, connect, learn, and create. Investors are pouring billions of dollars into companies developing and scaling this technology in the hopes of reaping significant financial rewards. Policymakers note that while guardrails are needed, regulating AI now could stifle innovation and disrupt the U.S.’s status as a global leader. Meanwhile, members of the public are grappling with questions about what AI will mean for their jobs, education, and personal well-being.
According to a 2025 Pew Research Center survey, six in 10 Americans say they want more control over how AI is used in their own lives, up six percentage points from the previous year.
While governments and market forces are certainly the most powerful actors, philanthropy also has a role to play in shaping our collective future with AI.
Philanthropy can help ensure we shape our shared AI future by facilitating robust public discussions about the guardrails needed to protect people from AI’s impacts, strategies for building AI with human dignity at its core, policies required to regulate AI agents so they do not replace human workers, and investments that will create opportunities for those most affected by AI—young people.
We must identify, support, and celebrate creative and effective individuals who are willing to take risks to advance humanity’s collective knowledge and wisdom. This approach can keep people and the human experience at the center, no matter what direction technological development takes next. It provides a clear framework for evaluating the promises tech leaders continue to make against our real-world experiences with AI in daily life.
Some advocates talk about AI’s potential to accelerate new medical treatments and eradicate poverty, while others promote social media video generators, chatbots, and effortless creation of art, music, and film. The truth is that AI’s promised power to elevate human knowledge and efficiency has yet to be proven at a large scale.
Companies are laying off workers as they shift tasks previously done by people to AI, or using AI as a justification to cut jobs in pursuit of higher profits for shareholders. Teachers are working overtime to understand if and how they should integrate AI into their classrooms, while also trying to determine whether a bot or a human wrote students’ homework. Artists, writers, and other creators are watching as AI tools trained on their creative work are used to replicate their unique styles and cultural contributions without credit or compensation. Parents are weighing the risks of letting their children engage with AI, often asking themselves whether this technology will set their kids up for future success or fundamentally harm them.
This level of uncertainty leaves people feeling like they lack agency, as so many events unfolding around the world feel out of their control.
As we stand at the cusp of AI’s broader societal integration, we must remember that people are AI’s designers, users, investors, and inventors—and we can also be its governors. We have a unique opportunity to build systems with strong ethical frameworks and guardrails. It is essential that philanthropy fund organizations that help shape AI governance, inform public opinion, and innovate in how these digital technologies are built and used.
Our future with AI is a story still being written. The stakes are too high to defer decisions to a small group of companies and their leaders. As funders, tech leaders, elected officials, and everyday citizens, we must work together to shape our collective future so it benefits everyone. Instead of a story about how AI agents might form the teams of the future, let’s craft a story about how our young people will have agency in an era of AI.
This article is provided by a third-party content provider. SeaPRwire (https://www.seaprwire.com/) makes no warranties or representations regarding its content.
Category: Top News, Daily News
SeaPRwire provides global press release distribution services for companies and organizations, covering more than 6,500 media outlets, 86,000 editors and journalists, and over 3.5 million end-user desktop and mobile apps. SeaPRwire supports multilingual press release distribution in English, Japanese, German, Korean, French, Russian, Indonesian, Malay, Vietnamese, Chinese, and more.