By Davor Birus on October 28th, 2024
One of the most important aspects of RelateWell is its structured feedback system. For users to truly benefit from the app, this feedback needs to be high-quality and comprehensive. As a software engineer with an interest in psychology, I’ve read several books on the subject over the years, but creating this kind of content independently was beyond my expertise.
Initially, I considered writing all the content myself, but I soon realized I could leverage Large Language Models (LLMs) to generate it instead. Early in the project, I used Anthropic's Claude to create the first draft, and the results were impressive. In a short time, the AI generated a structured outline consisting of 8 categories, 32 traits, and 160 behaviors, complete with descriptions—all without extensive prompt tuning.
As the project evolved, I recognized the need for more depth within each behavior. Each now includes five paragraphs of detail covering aspects like "How to Improve" and "How This Behavior Affects Me in the Future." Producing such a volume of content for 160 unique behaviors would have been nearly impossible manually.
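To give a sense of the structure, here is a rough sketch of how that content hierarchy could be modeled. The type and field names are illustrative rather than RelateWell's actual schema, and the split of traits per category and behaviors per trait assumes an even distribution.

```typescript
// Illustrative model of the feedback content hierarchy:
// 8 categories -> 32 traits -> 160 behaviors, each with five insight sections.

interface InsightSection {
  title: string;      // e.g. "How to Improve", "How This Behavior Affects Me in the Future"
  paragraph: string;  // one paragraph of generated detail
}

interface Behavior {
  name: string;
  description: string;
  insights: InsightSection[]; // five insight sections per behavior
}

interface Trait {
  name: string;
  description: string;
  behaviors: Behavior[]; // ~5 behaviors per trait (32 traits x 5 = 160), assuming an even split
}

interface Category {
  name: string;
  description: string;
  traits: Trait[]; // ~4 traits per category (8 categories x 4 = 32), assuming an even split
}
```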
To streamline this process, I integrated RelateWell’s editor with Anthropic’s API, enabling me to add categories, traits, behaviors, and insights directly within the app. This integration not only speeds up content creation but also allows for continuous refinement and expansion, ensuring RelateWell consistently delivers high-quality feedback content to its users.
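As a simplified illustration of what that integration looks like, here is a minimal sketch using Anthropic's TypeScript SDK. The model name, prompt wording, and helper function are assumptions on my part, not RelateWell's production code.

```typescript
import Anthropic from "@anthropic-ai/sdk";

const anthropic = new Anthropic(); // reads ANTHROPIC_API_KEY from the environment

// Hypothetical helper: generate the five insight paragraphs for one behavior.
async function generateInsights(trait: string, behavior: string): Promise<string> {
  const message = await anthropic.messages.create({
    model: "claude-3-5-sonnet-20241022", // illustrative model name
    max_tokens: 1024,
    system:
      "You are writing relationship feedback content for the RelateWell app. " +
      "Write five short paragraphs for the given behavior, covering: what it looks like, " +
      "why it happens, how it affects the relationship, how it affects me in the future, " +
      "and how to improve.",
    messages: [
      { role: "user", content: `Trait: ${trait}\nBehavior: ${behavior}` },
    ],
  });

  // The SDK returns a list of content blocks; take the text of the first one.
  const block = message.content[0];
  return block.type === "text" ? block.text : "";
}
```

In the editor, a call along the lines of `await generateInsights("Active Listening", "Interrupting during conversations")` (hypothetical trait and behavior names) could prefill the insight fields for review and editing before saving.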
What's next on my list:

- Finish adding notifications
- Test the complete sign-in experience
- Set up RelateWell on the DigitalOcean servers for the first time
- Polish the help documents throughout the app
- Do a final polish pass before the alpha release