Pinecone Launches Serverless Vector Database

2024-01-17

Pinecone, a leading vector database provider based in San Francisco, recently announced Pinecone Serverless, a new cloud-native database designed for developers building modern AI applications on top of large language models (LLMs). With the rapid rise of LLMs such as GPT-4, demand is growing for vector databases that provide semantic search and retrieval capabilities to reduce hallucinations and improve answer quality.

Pinecone's original vector database has become widely adopted, with more than 5,000 customers including Notion, Gong, and CS Disco. But the company has observed that the bar for building commercially viable AI applications keeps rising. "After working with thousands of engineering teams, we have found that the key to a successful AI application is knowledge: giving the model differentiated data to search so it can find the right context," said Edo Liberty, CEO of Pinecone. "We have redesigned our vector database from the ground up to make it extremely easy and cost-effective for developers to inject practically unlimited knowledge into their AI."

The result is Pinecone Serverless, a new serverless architecture that cuts costs significantly by separating storage from reads and writes. The multi-tenant system can run on as many nodes as needed to handle massive throughput, and billing is usage-based. Advances in Pinecone's indexing and retrieval algorithms also make it possible to run vector search over a virtually unlimited number of records sitting in blob storage, so developers can focus on building applications rather than worrying about infrastructure constraints.

Akshay Kothari, co-founder of Notion, sees clear value in this combination of usability and scale. "To bring our new Notion AI product to tens of millions of users worldwide, we need to support RAG over billions of documents while meeting strict requirements on performance, security, cost, and operations. This simply would not be possible without Pinecone."

Last year, Pinecone raised $100 million in Series B funding at a $750 million valuation. Pinecone Serverless is now in public preview on AWS, with support for GCP and Azure coming soon.

While the cost and infrastructure advantages are significant, Liberty ultimately sees Pinecone Serverless as a tool for unleashing a wave of revolutionary AI applications. By letting developers easily connect vast repositories of knowledge to large language models, Pinecone removes the barriers to building remarkable, secure, and intelligent AI. "We are here to help engineers build better AI products," Liberty said. "With Pinecone Serverless, we are giving them the most critical component for achieving that vision."

Early adopters span industries from legal tech to sales analytics. If Liberty and Pinecone realize their full ambitions, these pioneers could prove to be the first wave of a new AI development paradigm.
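
For developers curious what this looks like in practice, the sketch below uses Pinecone's Python client (v3 or later, which shipped alongside the serverless launch) to create a serverless index, upsert a few vectors, and run a similarity query. The index name, embedding dimension, region, and placeholder vectors are illustrative assumptions, not details from the announcement.

```python
# Minimal sketch of using Pinecone Serverless from Python.
# Assumes the pinecone-client v3+ package and an API key from the Pinecone console;
# the index name, dimension, region, and vector values below are illustrative only.
from pinecone import Pinecone, ServerlessSpec

pc = Pinecone(api_key="YOUR_API_KEY")

# Create a serverless index: capacity scales with usage, no pods to size up front.
pc.create_index(
    name="example-knowledge-base",           # hypothetical index name
    dimension=1536,                           # e.g. OpenAI ada-002 embedding size
    metric="cosine",
    spec=ServerlessSpec(cloud="aws", region="us-west-2"),
)

index = pc.Index("example-knowledge-base")

# Upsert document embeddings (values would normally come from an embedding model).
index.upsert(vectors=[
    {"id": "doc-1", "values": [0.01] * 1536, "metadata": {"title": "Q3 report"}},
    {"id": "doc-2", "values": [0.02] * 1536, "metadata": {"title": "Onboarding guide"}},
])

# Query for nearest neighbors of a (placeholder) query embedding,
# e.g. to retrieve context for a RAG prompt.
results = index.query(vector=[0.015] * 1536, top_k=2, include_metadata=True)
for match in results.matches:
    print(match.id, match.score, match.metadata)
```

In a real RAG pipeline, the query embedding would come from the same embedding model used at ingestion time, and the returned metadata (or stored document text) would be stitched into the LLM prompt as retrieved context.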