OpenAI is reportedly developing a new generative music tool capable of creating songs and compositions from text or audio prompts.
The project could soon expand OpenAI’s reach beyond text and visuals into the realm of AI-generated sound.
The tool, still under development, is designed to generate music based on user input — such as written prompts or uploaded audio. Sources familiar with the project say it could help users add background music to videos or even generate guitar accompaniments for existing vocal tracks.
The company has not revealed a release timeline, and it remains unclear whether the tool will launch as a standalone product or as a feature integrated into OpenAI’s existing platforms, such as ChatGPT or the video-generation model Sora.
Collaboration with Juilliard students
In a bid to refine the quality of its musical outputs, OpenAI is reportedly collaborating with students from the prestigious Juilliard School. These students are helping annotate musical scores, providing valuable training data to improve the AI’s understanding of musical structure and style.
This collaboration highlights OpenAI’s continued emphasis on combining artistic expertise with machine learning to create tools that can assist — rather than replace — human creativity.
OpenAI experimented with generative music models before the launch of ChatGPT, though those early systems were limited in scope. In recent years, the company’s audio work has focused instead on models for text-to-speech and speech-to-text.
By re-entering the generative music space, OpenAI would compete with AI music platforms such as Google’s MusicLM and Suno, both of which let users create songs from simple written prompts.