The Geospatial Economic Transformer (GET-1) is an artificial intelligence that evaluates potential store locations for a variety of retail brands in major metropolitan areas. You choose the location for a new store and GET-1 predicts its expected revenue.
We built GET-1 to show how transformers can answer economic questions.
GET-1 takes the transformer architecture behind breakthrough advances in artificial intelligence, such as DALL-E, GPT-3, and ChatGPT, and applies it to an important business problem. ChatGPT is a large language model (LLM); GET-1 is an LXM – a large model for other forms of sequential data. Imagine what you can do when you teach ChatGPT to speak the language of consumer behavior.
[Here’s a video explaining how this works].
Transformer-based models like GET-1 make sense of ordered data, such as a sequence of transactions or a stream of clicks. GET-1 combines two sets of geospatially ordered data: (1) a set of stores, which tells the model which stores exist where, and (2) a set of locations, divided into small hexagons, which tells the model where people live (based on census features such as population and median income).
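One way to picture these two inputs is as a single token sequence the transformer can attend over. The sketch below is illustrative only: the token fields and the `build_context` helper are our assumptions for exposition, not GET-1's actual input format.

```python
from dataclasses import dataclass

@dataclass
class StoreToken:
    brand: str            # e.g. "Target"
    lat: float
    lng: float

@dataclass
class HexToken:
    hex_id: str           # identifier for one small hexagonal cell
    population: int       # census features attached to the cell
    median_income: float

def build_context(stores, hexes):
    """Flatten the two geospatial inputs into one token sequence.

    A real model would embed each token; here we simply tag its type
    so a transformer could tell stores and hex cells apart.
    """
    sequence = [("store", s.brand, s.lat, s.lng) for s in stores]
    sequence += [("hex", h.hex_id, h.population, h.median_income) for h in hexes]
    return sequence
```

In this framing, "adding a new store" is just appending one more store token to the sequence and running the model again.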
We trained GET-1 to predict weekly credit card spending at thousands of stores across the US. GET-1 learns basic things like how different brands perform and how spending changes with the seasons, but it also learns how stores interact with their surroundings and how they compete. You can add a new store and ask GET-1 what would have happened – not just to the store you added, but also to same-brand and competing stores in the region.
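The what-if query above amounts to two forward passes: one over the region as it is, and one with the proposed store added. Here is a minimal sketch of that pattern, where `predict_fn` is a stand-in for the trained model (the `toy_predict` scoring rule is invented purely to make the example runnable).

```python
def counterfactual(predict_fn, stores, new_store):
    """Compare region-wide predictions with and without a proposed store.

    predict_fn maps a list of stores to {store_id: weekly_revenue};
    it stands in for the trained model (hypothetical interface).
    """
    baseline = predict_fn(stores)
    with_new = predict_fn(stores + [new_store])
    # Positive delta: the store benefits; negative: it loses revenue.
    return {sid: with_new[sid] - baseline.get(sid, 0.0) for sid in with_new}

def toy_predict(stores):
    """Invented scoring rule: each same-brand rival costs $10/week."""
    return {
        s["id"]: 100.0 - 10.0 * sum(
            1 for t in stores
            if t["brand"] == s["brand"] and t["id"] != s["id"]
        )
        for s in stores
    }

region = [{"id": "a", "brand": "X"}, {"id": "b", "brand": "Y"}]
deltas = counterfactual(toy_predict, region, {"id": "new", "brand": "X"})
# Store "a" loses revenue to its new same-brand rival; "b" is unaffected.
```

The real model's deltas come from learned spatial interactions rather than a hand-written rule, but the query shape is the same.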
Locating a new store is not a trivial decision, and our current solution is not perfect. Many of the features that impact store sales are hard to quantify or don’t exist in a structured form. For a suburban store, it might matter whether the storefront is along a commercial strip, inside a mall, or part of a strip mall; for a downtown store, nearby foot traffic no doubt drives sales. While our data contain the exact latitude and longitude of each store, they omit many of the features that bring a location to life. As a result, the model attends less to local context than it might if that context were more richly described in the data.
From a modeling standpoint, the primary challenge is that adding a new store requires the model to evaluate a configuration it has never observed. Whether the model generates reasonable predictions depends on how “close” the new configuration is to the training data. For example, we should expect a more reliable prediction when placing a Target in a suburb than, say, in farmland: the former more closely hews to store location decisions we observe in the world. This is akin to asking ChatGPT a novel question. If your prompt resembles others in its training data, its response will likely be intelligent; if your prompt is unlike anything it has seen, the response becomes far less reliable.
To address this issue, we trained an ensemble of models, each on a different partition of the training data and with different initial weights. If the models in the ensemble disagree about a placement decision, we cannot make a high-confidence prediction; in the UI, locations with high disagreement among the ensemble’s models are grayed out.
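The gray-out logic can be sketched as a simple disagreement check across the ensemble. The coefficient-of-variation threshold below is an illustrative choice, not the production value, and `models` is just a list of callables standing in for the trained ensemble members.

```python
import statistics

def ensemble_prediction(models, location, threshold=0.15):
    """Run every ensemble member on one candidate location and flag
    the result as inconclusive when disagreement is high.

    Disagreement is measured as the standard deviation of the
    predictions relative to their mean (coefficient of variation).
    """
    preds = [m(location) for m in models]
    mean = statistics.fmean(preds)
    spread = statistics.pstdev(preds)
    conclusive = mean != 0 and spread / abs(mean) <= threshold
    return mean, spread, conclusive

# Three members that roughly agree -> a conclusive prediction.
agree = [lambda loc: 100.0, lambda loc: 102.0, lambda loc: 98.0]
mean, spread, ok = ensemble_prediction(agree, {"lat": 41.9, "lng": -87.6})
```

Locations where `conclusive` comes back `False` are the ones we gray out rather than report.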
GET-1 is an early step in our journey to apply LXMs to the complex business problems faced by the enterprise. If the prospect of building cutting-edge AI for the world’s largest organizations speaks to you, join us.