Studio Ghibli Memes Reflect OpenAI’s Shift Under the Trump Era

The recent surge of Studio Ghibli-style memes on social media can be attributed to several factors.

Notably, OpenAI recently updated ChatGPT, enabling users to generate higher-quality images with the GPT-4o model. The update marks a significant improvement in AI image generation, allowing for more accurate adherence to text prompts and producing images with greater fidelity.

However, the updated ChatGPT isn’t the sole cause for the meme surge.

OpenAI has also eased restrictions on the types of images users can create, with CEO Sam Altman describing the move as a new benchmark for creative freedom. This includes permitting the generation of images depicting adult public figures and reducing the likelihood of ChatGPT rejecting prompts, even those with potentially offensive content.

Altman acknowledged the potential for both positive and offensive creations, stating that the aim is to prevent the tool from generating offensive content unless specifically requested, within reasonable limits.

Users quickly capitalized on these changes, sharing “Ghiblified” images of controversial subjects. Even the official White House account on X posted a Studio Ghibli-style image of an ICE officer detaining an alleged undocumented immigrant.

This shift has been developing over time. OpenAI initially operated as a research lab with tightly controlled tools. Early chatbots and image generation models featured strict content filters to prevent misuse. Over the years, OpenAI has broadened access to its tools through a strategy it calls “iterative deployment,” with the release of ChatGPT in November 2022 being the most prominent example. The company believes this approach is crucial for society to adapt to the changes brought about by AI.

However, this change in OpenAI’s model behavior policies is also influenced by the recent election of President Donald Trump, and the subsequent cultural shift.

Trump and his allies have strongly criticized what they view as censorship of free speech online by major tech companies. Some conservatives have drawn parallels between social media content moderation and the more recent efforts by AI companies, including OpenAI, to restrict the content generated by AI models. Elon Musk has described ChatGPT as having “woke programmed into its bones.”

Like many large corporations, OpenAI is actively seeking to build relationships with the Trump White House. It achieved an early success when Trump, on his second day in office, stood alongside Altman to announce a significant investment in the data centers OpenAI believes are necessary to train the next generation of AI systems. However, OpenAI faces a delicate situation, as Musk, a key Trump supporter and advisor, has a well-known dislike for Altman. Musk, who co-founded OpenAI with Altman in 2015, left the company after an unsuccessful attempt to become CEO and is now suing Altman and OpenAI, alleging that they have deviated from OpenAI’s original non-profit mission. With Musk holding influence within the White House and leading a competing AI company, xAI, it is particularly important for OpenAI’s business interests to foster positive relations with the Trump administration.

Earlier in March, OpenAI submitted a document outlining recommendations for the new administration’s tech policy, signaling a shift in tone compared to previous communications. The document emphasized that OpenAI’s freedom-focused policy proposals could strengthen America’s leadership in AI, drive economic growth, enhance American competitiveness, and protect national security. It urged the Trump administration to exempt OpenAI, and the rest of the private sector, from 781 state-level AI regulations, arguing that they could hinder innovation. In return, OpenAI offered to provide the U.S. government with “learnings and access” from AI companies and ensure that the U.S. maintains its leading position in the AI race against China.

Alongside the new ChatGPT update, OpenAI reinforced its commitment to policies designed to grant users greater freedom to create with its AI tools, within certain boundaries. Joanne Jang, OpenAI’s head of model behavior, stated that they are moving away from blanket refusals in sensitive areas towards a more precise approach focused on preventing real-world harm. The goal is to embrace humility, acknowledging the limits of their knowledge and adapting as they learn.

Jang cited examples of previously disallowed content that OpenAI is now allowing. This includes generating images of public figures, although OpenAI will offer an opt-out list allowing individuals to control whether ChatGPT can generate images of them. Images depicting children will be subject to stricter protections and guardrails.

The concept of “offensive” content will also be reevaluated under OpenAI’s new policies. Uses that may be considered offensive by some but do not cause real-world harm will be increasingly permitted. Jang explained that previously, the model rejected requests such as altering a person’s eyes to appear more Asian or making a person heavier, unintentionally implying that these attributes were inherently offensive. Such prompts may be allowed in the future.

OpenAI’s tools previously prohibited the generation of hate symbols like swastikas. However, Jang stated that the company recognizes these symbols can sometimes appear in educational or cultural contexts, and that it will shift toward using technical methods to better identify and refuse harmful misuse without banning the symbols outright.

Jang concluded that AI lab employees should not be the arbiters of what people should and shouldn’t be allowed to create.