
OpenAI’s first developer conference happened. Here’s what we thought of DevDay


There’s a lot we could talk about, but I’ll try to keep it brief. We’re closely tracking OpenAI’s advancements and currently use OpenAI’s models in AddMaple to summarize data containing free text (unstructured data), among other things. We have a few additional features coming soon and are convinced that AI, coupled with programming genius, can fundamentally improve data analysis, making it more efficient, intuitive, and even enjoyable. But why stop there? Why not innovate beyond data analysis and improve how we write, read, and engage with reports too? More on that below. In short, OpenAI’s first DevDay developer conference didn’t unveil GPT-5, but it did bring some super exciting opportunities.

Here are our general takeaways:

Longer context support
This is something we’ve grappled with at AddMaple. If you want an LLM to summarize a lot of free text, the best way is to send it everything in one chunk. However, GPT-3.5 and GPT-4 maxed out at 32,000 tokens (about 24,000 words). That’s still a lot, but it wasn’t enough for larger surveys or datasets containing long free-text responses, e.g. interviews. There are existing workarounds: split the text into chunks, summarize each chunk, then summarize the summaries (sketched below), but this is slow and introduces other problems.
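For readers who like concrete detail, here’s a rough sketch of that chunk-and-summarize workaround using the openai Python SDK. The prompt, model choice, and the pre-split chunks are illustrative assumptions, not our production code:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def summarize(text: str, model: str = "gpt-3.5-turbo") -> str:
    """One summarization call; the prompt here is just a placeholder."""
    resp = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": "Summarize the key themes in these survey responses."},
            {"role": "user", "content": text},
        ],
    )
    return resp.choices[0].message.content

def summarize_long_text(chunks: list[str]) -> str:
    # Map step: summarize each chunk separately (one API round-trip per chunk, hence slow).
    partial_summaries = [summarize(chunk) for chunk in chunks]
    # Reduce step: summarize the concatenated partial summaries.
    return summarize("\n\n".join(partial_summaries))
```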

The new GPT-4 Turbo (vroom 🏎) now supports 128,000 tokens, or about 96,000 words. That’s more than a typical novel, which runs to about 80,000 words. While still pretty expensive, it unlocks many more use cases (and kills Anthropic’s main USP…)
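With the larger window, the same summarization job could (cost permitting) become a single call. A minimal sketch, again using the openai Python SDK and assuming gpt-4-1106-preview, the GPT-4 Turbo preview model announced at DevDay:

```python
from openai import OpenAI

client = OpenAI()

def summarize_in_one_pass(full_text: str) -> str:
    # The whole corpus goes in as one message, up to roughly 128,000 tokens of context.
    resp = client.chat.completions.create(
        model="gpt-4-1106-preview",
        messages=[
            {"role": "system", "content": "Summarize the key themes in these free-text survey answers."},
            {"role": "user", "content": full_text},
        ],
    )
    return resp.choices[0].message.content
```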

GPTs and the Assistants API with knowledge retrieval and persistent threads
A large number of AI startups were effectively threatened yesterday by this feature. Without getting too far into the weeds of the tech, OpenAI has created their first version of “AI Agents”. These agents differ from the familiar ChatGPT interface, where you ask a question and it generates a response: an assistant can retrieve knowledge from files you upload, keep a persistent conversation thread, and call tools, so it can carry out more advanced, multi-step tasks.
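As a rough illustration, here’s what wiring up such an assistant looked like with the beta Assistants API at launch (the file name and instructions are placeholders, and the beta interface may well have changed since):

```python
from openai import OpenAI

client = OpenAI()

# Upload a file the assistant can use for knowledge retrieval.
knowledge = client.files.create(file=open("knowledge_base.pdf", "rb"), purpose="assistants")

# Create an assistant with the built-in retrieval tool attached to that file.
assistant = client.beta.assistants.create(
    name="Example Assistant",
    instructions="Answer questions using the uploaded document where relevant.",
    model="gpt-4-1106-preview",
    tools=[{"type": "retrieval"}],
    file_ids=[knowledge.id],
)

# Persistent thread: the conversation state lives on OpenAI's side, not in your app.
thread = client.beta.threads.create()
client.beta.threads.messages.create(
    thread_id=thread.id, role="user", content="What does the document say about pricing?"
)
run = client.beta.threads.runs.create(thread_id=thread.id, assistant_id=assistant.id)
```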

Here are a few ideas of what can be created with this feature (generated by ChatGPT of course…)

  • A Custom Family Cooking Assistant: rather than stopping at ChatGPT’s ability to generate recipe ideas, you could have a more advanced cooking assistant tailored to your family’s dietary preferences, what you have in the fridge, and even the kitchen gadgets you own and enjoy using.
  • An Intelligent DJ That Gets Your Music Taste: We use Spotify A LOT at home, especially their auto-generated playlists. But there are two huge problems. Firstly, it sometimes suggests styles of music that I really don’t like, and I can’t easily stop Spotify from playing them. Secondly, Spotify sometimes loses track of which tracks I’ve already heard on different playlists within a short period, and I don’t really want to hear a song more than two or three times a week, regardless of playlist. It should be fairly easy to create an assistant that fixes these issues while still helping me discover more music.
  • A Legal Precedent Researcher: if you have a hefty database of previous legal cases, you could benefit from a Legal Assistant that helps you find relevant cases with precedents similar to the one you’re currently working on.

OpenAI’s Business Strategy:

From a business perspective, OpenAI is really interesting. They are going after enormous markets simultaneously and by all accounts are succeeding in all of them!

  • D2C: ChatGPT, which is already boasting nearly $1 billion in annual revenue
  • Custom Models: a program that lets enterprises and well-funded startups train their own models, with pricing starting at $2–3 million: https://openai.com/form/custom-models
  • API Platform: OpenAI keep reducing their API pricing, enticing developers from big and small companies alike to build on top of their services. Some of the new features introduce an element of “lock-in”, which will make life even harder for their competitors.
  • Revenue-sharing and Incentivization: OpenAI announced a marketplace for AI assistants within ChatGPT, offering developers a slice of the revenue they generate.

I’d be interested to hear what you took away from yesterday’s announcements.

For data analysis, and AddMaple specifically, these are some of the ideas we’re exploring:

An Insights Assistant to support users with their data analysis:

Our current capabilities: AddMaple detects column types and converts raw data into chart summaries automatically, without requiring upfront cleaning or preparation. Users can near-instantly segment or pivot these charts and share them individually, or add them to a report for their audience to explore.

But with these updates from OpenAI, an Insights Assistant could interpret the survey results behind AddMaple’s visualizations and provide natural-language summaries of key findings to speed up report writing. For instance, once AddMaple visualizes customer satisfaction across different regions or products, the assistant could automatically generate a text summary of which regions or products received the highest and lowest satisfaction.
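As a sketch of what that could look like under the hood (the chart data, prompt, and model are illustrative assumptions, not AddMaple’s actual implementation):

```python
import json
from openai import OpenAI

client = OpenAI()

# Hypothetical aggregate, shaped like a chart AddMaple might produce:
# mean satisfaction score (1-5) by region.
chart = {
    "title": "Customer satisfaction by region",
    "values": {"North": 4.4, "South": 3.1, "East": 3.9, "West": 4.0},
}

resp = client.chat.completions.create(
    model="gpt-4-1106-preview",
    messages=[
        {"role": "system", "content": "You write short, factual summaries of survey charts."},
        {"role": "user", "content": "Summarize the key finding in this chart:\n" + json.dumps(chart)},
    ],
)
print(resp.choices[0].message.content)  # e.g. a sentence noting the North scores highest and the South lowest
```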

Custom Query Assistant for Survey Data:

Once AddMaple has read the dataset and produced its in-browser summaries, we could let users ask a Query Assistant specific questions about their survey data, such as “Which features received the highest satisfaction ratings?”. The assistant could process the query, retrieve the relevant visualizations from AddMaple, and then go further by providing spoken or written summaries of the charts. Our published reports are already explorable, and with OpenAI’s announcements they could be queried with natural-language assistants as well. Not everyone reading a report wants to read. What if they could simply ask questions of the report and its published columns in plain language? Sounds like a great opportunity to me.
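One way to wire that up would be function calling, where the model turns a plain-language question into a structured chart request. A minimal sketch, with a hypothetical get_chart tool and made-up column names:

```python
import json
from openai import OpenAI

client = OpenAI()

# Describe a hypothetical tool the report front end knows how to execute.
tools = [{
    "type": "function",
    "function": {
        "name": "get_chart",
        "description": "Fetch the chart for a survey column, optionally split by a segment.",
        "parameters": {
            "type": "object",
            "properties": {
                "column": {"type": "string", "description": "e.g. 'feature_satisfaction'"},
                "segment": {"type": "string", "description": "e.g. 'region'"},
            },
            "required": ["column"],
        },
    },
}]

resp = client.chat.completions.create(
    model="gpt-4-1106-preview",
    messages=[{"role": "user", "content": "Which features received the highest satisfaction ratings?"}],
    tools=tools,
    tool_choice="auto",
)

# The model proposes a tool call; the report would run it and render (or narrate) the chart.
call = resp.choices[0].message.tool_calls[0]
print(call.function.name, json.loads(call.function.arguments))
```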

