Last month, I wrote about a semantic subspace in the word2vec model of the Stanford Encyclopedia of Philosophy. The idea, in short, is that you can train an extremely simple word-embedding model (word2vec) on a body of text and then search the embedding space of that model (that is, the vector space made of all… Continue reading Probing the Bureaucrat-Poet Axis
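
As a rough illustration of the setup sketched above, here is a minimal example of training a word2vec model on a body of text and then querying its embedding space. This is only a sketch under assumptions of mine: it uses the gensim library, and the tiny corpus and query word are placeholders, not the Stanford Encyclopedia of Philosophy data from the original post.

```python
# A minimal sketch, assuming the gensim library; the corpus and query
# word below are placeholders, not the SEP corpus from the post.
from gensim.models import Word2Vec

# A tiny tokenized "body of text" standing in for the real corpus.
corpus = [
    ["the", "bureaucrat", "filed", "the", "form"],
    ["the", "poet", "wrote", "the", "verse"],
    ["the", "clerk", "stamped", "the", "document"],
]

# Train an extremely simple word-embedding model (word2vec).
model = Word2Vec(sentences=corpus, vector_size=50, window=3, min_count=1, seed=42)

# Search the embedding space: nearest neighbours of a word by cosine similarity.
print(model.wv.most_similar("poet", topn=3))
```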