Building a Twitter Personality Test and Twitter Roast: A Deep Dive into LLM Orchestration
Introduction
If you landed on this page, you’ve probably heard of our Twitter personality AI tool (you can find it here). We launched it just a few days before the official Wordware launch to generate buzz and support our launch efforts.
Long story short, it went viral and exceeded our expectations. To date, we’ve attracted over 7.3 million users to this side project. The most significant outcome was a spike in users on our core platform, Wordware, which surged to over 300,000 in just a few weeks.
Surprisingly, we also generated substantial revenue from something we initially didn’t expect to monetize. At its peak, while going viral in Japan, we were making over $4,000 per hour. Ultimately, we brought in well over $100,000, a figure we shared publicly after the launch.
The Secrets: The Art of Prompt Engineering
Now, let’s talk about the secrets behind our success—namely, the art of LLM orchestration. This was the core of our project, enabling everything to function seamlessly.
What many don’t realize is that we spent significant time on various iterations, testing numerous LLMs. There’s no definitive answer to which LLM is best for a particular task. We explored hundreds of prompt chains, various prompting techniques, and combinations of large language models.
Achieving high-quality results from LLMs took more time than many might expect. One of our key strategies—alongside the rapid iterations made possible by Wordware—was our structured generation feature. This powerful tool enabled us to harness AI effectively by producing structured data rather than just blocks of text. It allowed us to pass results in the desired format to our frontend via API, which was crucial for our project.
Structured generations solve a common challenge: getting an AI to produce output that is predictable and machine-readable rather than just a block of text. When we set up a structured generation, it takes all the relevant data from the Wordapp and feeds it to our chosen AI model. The difference lies in the output: instead of unstructured text, the model returns an object with defined fields and values. For instance, a recipe might yield an object with fields for ingredients and instructions, while a product description might produce fields for price and availability.
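To make that concrete, here is a minimal TypeScript sketch of the difference between the two kinds of output. The field names and the `PersonalityResult` type are assumptions for illustration; they are not the schema we actually shipped.

```ts
// Unstructured generation: one opaque block of text the frontend would have to parse.
type UnstructuredResult = string;

// Structured generation: an object with named, typed fields the frontend can
// render directly. These field names are hypothetical, for illustration only.
interface PersonalityResult {
  archetype: string;   // short personality label
  strengths: string[]; // list of short strength descriptions
  roast: string;       // the roast paragraph shown on the results page
}

// A structured result can be sent straight from an API route to the frontend
// and rendered without any string parsing.
function renderResult(result: PersonalityResult): string {
  return [
    `Archetype: ${result.archetype}`,
    `Strengths: ${result.strengths.join(", ")}`,
    `Roast: ${result.roast}`,
  ].join("\n");
}

console.log(
  renderResult({
    archetype: "The Chaotic Optimist",
    strengths: ["consistent posting", "self-aware humor"],
    roast: "A playful roast would go here.",
  }),
);
```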
This capability is especially beneficial when passing data to other systems, such as databases or APIs. The AI can generate content in a format that can be referenced easily within our application. To create a structured generation, users simply type /structure in the editor and fill in the necessary details about the generation, such as its name and the model to use. Once the Wordapp runs, the model generates new content based on the existing input, which can be easily referenced.
The essence of a structured generation lies in the shape of the output data. Each structured generation has a defined set of fields, where each field consists of a name and a value. The name is always text, while the value can vary—ranging from text and numbers to lists and nested objects. This structured approach helps the AI infer the expected output.
When executing a structured generation, the AI considers both the previous content and the output shape defined by the user. By clearly outlining the fields and their types, we enable the AI to produce content that matches our expectations. If further context is needed, descriptions can be added to help guide the AI’s understanding.
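As a rough illustration of the idea, here is one common way to combine prior context with a field definition so the model knows what shape to return. This is a hedged sketch, not Wordware’s internal implementation, and the `FieldSpec` shape and example fields are assumptions.

```ts
// Illustrative only: a simple way to describe an output shape and pair it
// with the previous content. Not Wordware's actual implementation.
interface FieldSpec {
  name: string;                                 // field name the model should emit
  type: "text" | "number" | "list" | "object";  // expected value type
  description?: string;                         // optional hint to guide the model
}

// Combine the previous content with the desired output shape, so the model
// sees both what to analyze and what structure to return.
function buildStructuredPrompt(context: string, fields: FieldSpec[]): string {
  const shape = fields
    .map((f) => `- ${f.name} (${f.type})${f.description ? `: ${f.description}` : ""}`)
    .join("\n");
  return `${context}\n\nRespond with a JSON object containing exactly these fields:\n${shape}`;
}

const prompt = buildStructuredPrompt("Recent tweets from @example: ...", [
  { name: "archetype", type: "text", description: "A short personality label" },
  { name: "strengths", type: "list", description: "Three to five strengths inferred from the tweets" },
  { name: "roast", type: "text", description: "A playful, harmless roast" },
]);

console.log(prompt);
```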
Overall, structured generation was critical in ensuring we received precisely what we needed from the AI, enhancing our project’s efficiency and effectiveness.
LLM Learnings
To summarize our learnings regarding LLM orchestration:
• Iterate and Test: Experimenting with various LLMs takes time; persistence is key.
• Utilize LLM Features: Features like structured generation make outputs predictable and easy to pass to other systems.
• Prompt Chaining Matters: Chaining prompts, where each generation feeds the next, is essential for getting the best results (a minimal sketch follows this list).
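For readers who want to picture what a chain looks like in code, here is a minimal TypeScript sketch. `callModel` is a hypothetical placeholder for whichever LLM client you use, and the prompts are illustrative rather than the ones we shipped.

```ts
// Hypothetical stand-in for an LLM client call; swap in your provider's SDK.
async function callModel(prompt: string): Promise<string> {
  // Placeholder so the sketch runs end to end without a real API key.
  return `[model output for: ${prompt.slice(0, 40)}...]`;
}

// A two-step chain: the first generation's output becomes the second's input.
async function roastChain(tweets: string[]): Promise<string> {
  // Step 1: summarize the personality that the tweets suggest.
  const analysis = await callModel(
    `Summarize this person's personality based on their tweets:\n${tweets.join("\n")}`,
  );

  // Step 2: feed that analysis into a prompt that writes the roast itself.
  return callModel(
    `Using this personality analysis, write a playful, harmless roast:\n${analysis}`,
  );
}

roastChain(["shipped a side project at 3am", "coffee counts as a meal"]).then(console.log);
```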
Other Learnings
In addition to our LLM insights, we gained several other valuable lessons that contributed to the success of our Twitter personality test and roast AI tool:
- Optimize for Virality: We discovered that users were particularly drawn to the roast feature, which entertained them and drove significant shares across Twitter and other platforms. Recognizing its popularity, we moved the roast to the top of the page, which boosted its visibility and led to more engagement and sharing.
- Simplified Sharing: We understood the importance of making it easy for users to share their results. By implementing user-friendly sharing options and generating custom Open Graph images, we made it effortless for users to post their results on social media (a minimal sketch of the tags involved follows this list). This helped users show off their unique analyses and amplified our reach, attracting new users who were curious about the tool.
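To give a flavor of what simplified sharing meant in practice, here is a small sketch of per-result Open Graph tags so a shared link unfurls with a custom image. The helper, URL scheme, and copy are hypothetical, not our production code.

```ts
// Hypothetical helper that builds Open Graph / Twitter card tags for one result,
// so a shared link unfurls with a custom image instead of a generic preview.
interface ShareMeta {
  handle: string;   // the Twitter handle that was analyzed
  imageUrl: string; // pre-rendered card image for this specific result
}

function openGraphTags({ handle, imageUrl }: ShareMeta): string {
  return [
    `<meta property="og:title" content="My Twitter personality analysis (@${handle})" />`,
    `<meta property="og:description" content="See what the AI said about @${handle}." />`,
    `<meta property="og:image" content="${imageUrl}" />`,
    `<meta name="twitter:card" content="summary_large_image" />`,
  ].join("\n");
}

console.log(
  openGraphTags({
    handle: "example",
    imageUrl: "https://example.com/og/example.png",
  }),
);
```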
Closing Reflections on Roast AI
The success of our Twitter personality test and roast AI tool highlights the practical potential of LLM orchestration at Wordware. This roast AI, also called an AI personality generator, demonstrates our ability to engage users while generating clever retorts and custom insults that seamlessly fit into everyday interactions. Our roast generator has become a favorite among those looking to share a laugh with friends and family, delivering a fun and light-hearted experience that brings smiles to users’ faces.
We’ve learned that the true art of roasting lies in striking the right balance between humor and style, ensuring that every roast is enjoyable and harmless. It’s not just about building a random insult generator; it’s about crafting clever retorts and memorable roasts that give users a good laugh. Ultimately, being roasted isn’t just about a custom insult, right? Whether for a special occasion or just for fun, our tool lets users enter their Twitter handle and enjoy a moment of laughter, encouraging playful and creative exchanges without any worry.
Ultimately, it all comes down to effective LLM orchestration.