I’ll give you an example of what this looks like, one I went through myself: a couple of years ago I was working at PlanetScale and we shipped a MySQL extension for vector similarity search. We had very specific goals for the implementation; it was very different from everything else out there because it was fully transactional, and the vector data was stored on disk, managed by MySQL’s buffer pools. This is in contrast to simpler approaches such as pgvector, which uses HNSW and requires the similarity graph to fit in memory. It was a very different product, with very different trade-offs. And it was immensely alluring to take an EC2 instance with 32GB of RAM and load 64GB of vector data into our database, then do the same with a Postgres instance and pgvector. It’s the exact same machine, the exact same dataset, running the same queries! But PlanetScale is doing tens of thousands of queries per second while pgvector takes more than 3 seconds to finish a single one, because the HNSW graph keeps being paged back and forth from disk.
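The cliff is easy to see with a back-of-envelope calculation. This is a minimal sketch, and every number in it is an illustrative assumption (hit rate, nodes visited, storage latency), not a measurement from that benchmark:

```python
# Rough model of an HNSW search when the graph is 2x the size of RAM.
# HNSW traversal is essentially random access over the graph, so the
# page cache can't do much better than (RAM / index size) hit rate.
RAM_GB = 32
INDEX_GB = 64
HIT_RATE = RAM_GB / INDEX_GB        # ~0.5 for random access (assumption)

NODES_VISITED = 500                  # nodes touched per search (assumption)
READ_MS = 1.0                        # ms per random page-in on networked
                                     # storage like EBS (assumption)

misses = NODES_VISITED * (1 - HIT_RATE)
query_ms = misses * READ_MS
print(f"~{misses:.0f} page-ins, ~{query_ms:.0f} ms per query spent on I/O")
```

Even with these forgiving numbers the query spends hundreds of milliseconds just waiting on page-ins; crank up the search breadth, count the vector reads themselves, and add eviction pressure from concurrent queries, and multi-second latencies stop being surprising.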