Drift, the Conversation Cloud company, today announced a generative pre-trained transformer (GPT) integration using OpenAI’s API, available exclusively to select Drift customers. The initial integration provides artificial intelligence (AI) suggested replies in live chat, allowing sales representatives (reps) to respond immediately to complex customer questions without ever having to leave the conversation. The AI suggests a reply for the sales rep based on the company’s website content and marketing materials, the context of that specific conversation, and GPT. The rep can customize the suggested reply before sending, or dismiss it and generate a new one if the first didn’t meet their needs.

The ability to generate suggested replies by repurposing any available content also provides real-time, on-the-job training for new sales reps (whose onboarding currently takes three months on average), teaching them how to respond accurately and in the right brand voice. Gaps in education and training were cited as the top barrier to AI adoption by 63% of marketers and sales reps polled in Drift’s 2022 State of Marketing and Sales AI Report. Large language models like ChatGPT are helping change that by making AI usable and understandable to everyone. The buzz around ChatGPT, and the widespread experimentation that followed, has only underscored that conversation-based AI is paving the way of the future. It is one of the first AI technologies made available to the public in a way that non-experts can understand and experiment with.

Greetings everyone! I’m excited to share that I’ve developed a chatbot using LangChain, Pinecone, and GPT-3.5 Turbo. Currently, the chatbot’s responses to user queries are satisfactory, but there’s room for improvement. I’ve noticed that when a user says “Hi,” the chatbot seems to lack context from the provided documents.

Your “room for improvement” issue is that gpt-3.5-turbo lacks the power, efficiency, and downright effectiveness of gpt-4. It’s not just less smart; it’s relatively stupid. I tried using gpt-3.5-turbo with a document store consisting of a variety of real estate legal, regulatory, and public correspondence (bulletins, pamphlets, etc.), and its performance was terrible. I am doing embeddings, so the LLM is not determining the context documents; the vector database is. I noted that, given the exact same context documents, gpt-3.5-turbo would fail to comprehend their meaning at least 50% of the time, while gpt-4 would correctly analyze them more than 90% of the time. Most of the “chat with your PDF” systems you hear about use gpt-3.5, and I imagine that works well for small numbers of relatively simple documents. But I wouldn’t trust gpt-3.5-turbo for anything serious in a professional environment.
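The point that “the LLM is not determining the context documents, the vector database is” can be illustrated with a minimal sketch of the retrieval step. Everything here is a stand-in and not the poster’s actual code: a toy bag-of-words embedder replaces a real embedding model, and an in-memory list replaces Pinecone. In the real pipeline, the retrieved passages are stuffed into the prompt, and the chosen chat model (gpt-3.5-turbo or gpt-4) only ever sees that result.

```python
# Minimal sketch of embeddings-based retrieval for a chatbot.
# Assumptions: embed() is a toy bag-of-words stand-in for a real
# embedding model, and the `docs` list stands in for a Pinecone index.
import math
from collections import Counter

def embed(text):
    # Stand-in for a real embedding model: word-count vectors.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, docs, k=2):
    # The vector search picks the context; the LLM has no say in it.
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

def build_prompt(query, docs):
    context = "\n---\n".join(retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = [
    "Licensing bulletin: real estate brokers must renew annually.",
    "Pamphlet: homestead exemption rules for first-time buyers.",
    "Office memo: the break room fridge will be cleaned Friday.",
]
prompt = build_prompt("When must brokers renew their license?", docs)
print(prompt)
```

This also explains the “Hi” symptom above: a greeting embeds close to nothing in the document store, so the retrieved context is arbitrary, and a weaker model handles that empty or irrelevant context far worse than gpt-4 does.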