[{"data":1,"prerenderedAt":385},["ShallowReactive",2],{"blog-\u002Fblog\u002Fai-tools-for-api-companies-ai-native-design-principles\u002F":3,"related-blog-\u002Fblog\u002Fai-tools-for-api-companies-ai-native-design-principles\u002F":351},{"id":4,"title":5,"abstract":6,"author":6,"body":7,"description":332,"excerpt":6,"extension":333,"head":6,"image":6,"keywords":334,"meta":341,"modified":6,"navigation":342,"path":343,"proficiencyLevel":344,"published":345,"rawbody":346,"schemaOrg":6,"schemaType":347,"seo":348,"stem":349,"__hash__":350},"blog\u002Fblog\u002Fai-tools-for-api-companies-ai-native-design-principles.md","AI Tools for API Companies: 4 Design Principles for AI-Native API Consumption",null,{"type":8,"value":9,"toc":325},"minimark",[10,14,24,27,30,35,38,46,52,55,58,63,66,70,78,81,84,88,91,98,164,167,170,250,253,257,260,268,271,278,281,285,292,295,298,301,304,321],[11,12,5],"h1",{"id":13},"ai-tools-for-api-companies-4-design-principles-for-ai-native-api-consumption",[15,16,17,18,23],"p",{},"In our ",[19,20,22],"a",{"href":21},"\u002Fblog\u002Fai-tools-for-api-companies-ai-needs-context\u002F","previous article",",\nwe shared how our \"obvious\" approach to building tools for AI—auto-generating an MCP server from existing documentation and OpenAPI specs—failed spectacularly.\nAfter that failure, we had to learn how to design tools specifically for AI consumption.",[15,25,26],{},"We observed a fundamental difference between human developers and AI models:\nAI models have no persistent memory across sessions.\nWhen a human first encounters your API, they might struggle with the options, but after a few uses,\nthey internalize what works best.\nAI models don't.\nEvery conversation is their first time using your API.\nFor example, a breakthrough understanding about when to use country filters versus bounding boxes is lost the moment the chat ends.\nThis fundamental limitation drives every design decision that follows.",[15,28,29],{},"In the process of overhauling 
our MCP server,\nwe developed four guiding principles to help build the best tools for AI as an API company.",[31,32,34],"h2",{"id":33},"tool-descriptions-are-critical","Tool Descriptions Are Critical",[15,36,37],{},"We learned that tool descriptions do the heavy lifting.\nThe description is your chance to tell the model not just what your tool does,\nbut when and why a model might want to use it.\nA good description helps the model choose the right tool for the job and understand the context where it's most useful.",[15,39,40,41,45],{},"Our ",[19,42,44],{"href":43},"\u002Fproducts\u002Fmaps\u002Fstatic-maps\u002F","Static Maps API"," demonstrates this well. Originally, we described it as:",[47,48,49],"blockquote",{},[15,50,51],{},"Generate a map with a marker at a specific location. Returns a PNG image.",[15,53,54],{},"This description was technically accurate but too narrow.\nModels would only use it when explicitly asked for \"a map with a marker,\"\nmissing opportunities where it could create useful visualizations.",[15,56,57],{},"Now we describe it as:",[47,59,60],{},[15,61,62],{},"Generate a PNG map image of an area, optionally including markers and a line (e.g. 
to draw a route or a boundary)",[15,64,65],{},"The improved description helps models understand the broader capabilities and context where the tool is useful.\nInstead of only thinking \"marker on map,\" they now consider it for routes, boundaries, and general area visualization.",[31,67,69],{"id":68},"context-window-management-matters","Context Window Management Matters",[15,71,72,73,77],{},"We discovered that token consumption is a hidden constraint.\nLarge tool descriptions and verbose responses quickly eat into the available,\nor perhaps more importantly, ",[74,75,76],"em",{},"usable"," context window, leaving less room for user input and driving up costs.",[15,79,80],{},"The cost of cognitive load taught us to be frugal with our tool descriptions and the number of parameters for each tool.\nEvery word needed to help the model understand when to use the tool or how to interpret the results.\nThe same applies to tool responses.\nInstead of packing responses with potentially useful but unnecessary elements,\neach element of the response should be focused on helping the model answer the question.",[15,82,83],{},"We believe this merits further investigation.\nWe want to understand how tuning descriptions for specific models could yield even better performance,\nespecially since different AI models interpret tool descriptions in different ways.",[31,85,87],{"id":86},"optimize-output-formats-for-ai-interpretation","Optimize Output Formats for AI Interpretation",[15,89,90],{},"Our API responses, while great for programmatic consumption, needed rethinking for AI consumption.\nInstead of just returning raw JSON, we started designing responses that help models interpret and use the data effectively.\nSometimes this meant adding human-readable summaries alongside or instead of structured data.\nOther times it meant restructuring the response to highlight the most relevant information.\nThe goal was always to reduce the cognitive load on the model while preserving the essential 
information.",[15,92,40,93,97],{},[19,94,96],{"href":95},"\u002Fproducts\u002Fgeospatial-apis\u002F","Timezone API"," illustrates this perfectly.\nOriginally, we returned the technical data that developers needed:",[99,100,105],"pre",{"className":101,"code":102,"language":103,"meta":104,"style":104},"language-json shiki shiki-themes github-light","{\n  \"tz_id\": \"Europe\u002FZurich\",\n  \"base_utc_offset\": 3600,\n  \"dst_offset\": 3600\n}\n","json","",[106,107,108,117,134,147,158],"code",{"__ignoreMap":104},[109,110,113],"span",{"class":111,"line":112},"line",1,[109,114,116],{"class":115},"sgsFI","{\n",[109,118,120,124,127,131],{"class":111,"line":119},2,[109,121,123],{"class":122},"sYu0t","  \"tz_id\"",[109,125,126],{"class":115},": ",[109,128,130],{"class":129},"sYBdl","\"Europe\u002FZurich\"",[109,132,133],{"class":115},",\n",[109,135,137,140,142,145],{"class":111,"line":136},3,[109,138,139],{"class":122},"  \"base_utc_offset\"",[109,141,126],{"class":115},[109,143,144],{"class":122},"3600",[109,146,133],{"class":115},[109,148,150,153,155],{"class":111,"line":149},4,[109,151,152],{"class":122},"  \"dst_offset\"",[109,154,126],{"class":115},[109,156,157],{"class":122},"3600\n",[109,159,161],{"class":111,"line":160},5,[109,162,163],{"class":115},"}\n",[15,165,166],{},"This worked fine for developers who could calculate the current local time themselves.\nBut AI models struggled with a surprising limitation: they don't inherently know what time it is right now.\nWhen someone asked \"Is this restaurant in Zurich open now?\",\nmodels couldn't reliably determine the current local time from just the timezone offset data.",[15,168,169],{},"So we added the local timestamps to our response:",[99,171,173],{"className":101,"code":172,"language":103,"meta":104,"style":104},"{\n  \"tz_id\": \"Europe\u002FZurich\",\n  \"base_utc_offset\": 3600,\n  \"dst_offset\": 3600,\n  \"timestamp\": 1749479378,\n  \"local_rfc_2822_timestamp\": \"Mon, 9 Jun 2025 16:29:38 +0200\",\n  
\"local_rfc_3389_timestamp\": \"2025-06-09T16:29:38+02:00\"\n}\n",[106,174,175,179,189,199,209,221,234,245],{"__ignoreMap":104},[109,176,177],{"class":111,"line":112},[109,178,116],{"class":115},[109,180,181,183,185,187],{"class":111,"line":119},[109,182,123],{"class":122},[109,184,126],{"class":115},[109,186,130],{"class":129},[109,188,133],{"class":115},[109,190,191,193,195,197],{"class":111,"line":136},[109,192,139],{"class":122},[109,194,126],{"class":115},[109,196,144],{"class":122},[109,198,133],{"class":115},[109,200,201,203,205,207],{"class":111,"line":149},[109,202,152],{"class":122},[109,204,126],{"class":115},[109,206,144],{"class":122},[109,208,133],{"class":115},[109,210,211,214,216,219],{"class":111,"line":160},[109,212,213],{"class":122},"  \"timestamp\"",[109,215,126],{"class":115},[109,217,218],{"class":122},"1749479378",[109,220,133],{"class":115},[109,222,224,227,229,232],{"class":111,"line":223},6,[109,225,226],{"class":122},"  \"local_rfc_2822_timestamp\"",[109,228,126],{"class":115},[109,230,231],{"class":129},"\"Mon, 9 Jun 2025 16:29:38 +0200\"",[109,233,133],{"class":115},[109,235,237,240,242],{"class":111,"line":236},7,[109,238,239],{"class":122},"  \"local_rfc_3389_timestamp\"",[109,241,126],{"class":115},[109,243,244],{"class":129},"\"2025-06-09T16:29:38+02:00\"\n",[109,246,248],{"class":111,"line":247},8,[109,249,163],{"class":115},[15,251,252],{},"Now AI models can immediately understand both the current local time and work with the format they parse most naturally.\nThis change also made the API more useful for human developers who previously had to calculate local time themselves.",[31,254,256],{"id":255},"split-complex-endpoints-into-focused-tools","Split Complex Endpoints into Focused Tools",[15,258,259],{},"Our biggest breakthrough came from abandoning the one-tool-per-endpoint approach.\nInstead of exposing every endpoint as a tool with many parameters,\nwe learned to split them into focused, use-case-specific tools,\noften 
removing complexity to ensure models can effectively use each tool.",[15,261,262,263,267],{},"Take our ",[19,264,266],{"href":265},"\u002Fproducts\u002Fgeocoding-search\u002Fgeocoding\u002F","geocoding"," tool as an example.\nOriginally, we exposed all the filtering options available in our API—bounding box coordinates,\ncircular search areas, layer filters, and country restrictions.\nWe thought giving AI models more control would lead to better results.",[15,269,270],{},"In practice, the models never used most of these options, even when they should have.\nWhen they did try to use them, they often got confused about which filter was most appropriate and picked sub-optimal choices.\nA request to \"find coffee shops in downtown Seoul\" might trigger attempts to calculate bounding box coordinates rather than simply using the country filter.",[15,272,273,274,277],{},"We eventually eliminated everything except the country filter.\nWe kept this one because it was the simplest—just a 3-character ISO code rather than coordinate lists for bounding boxes or radius calculations.\nIt also happens that most models can figure out country codes quite easily from their pre-trained general knowledge,\nturning \"find addresses in South Korea\" into a simple ",[106,275,276],{},"country: \"KOR\""," parameter.",[15,279,280],{},"The key insight: AI models perform better with multiple simple tools than one complex tool.\nThey can chain together focused tools to solve complex problems,\nbut they struggle to navigate a tool with dozens of configuration options.\nAs our tool usage grows, we expect to iteratively add new,\nfocused tools based on our existing endpoints to better help models answer location questions.",[31,282,284],{"id":283},"from-spectacular-failure-to-working-tools","From Spectacular Failure to Working Tools",[15,286,287,288,291],{},"These four principles transformed our MCP server:\ncontextual tool descriptions, focused endpoint splitting,\nAI-optimized responses, and 
careful token management.\nAs we discovered them, the spectacular failure we described in our ",[19,289,290],{"href":21},"first article"," became something that actually works.\nAI models can now successfully help users find routes, geocode addresses, and interact with our location services in natural, intuitive ways.",[15,293,294],{},"But mastering the technical aspects of tool design was just the beginning.\nThe real surprises came from what we learned about our own APIs in the process,\nand the unexpected capabilities that emerged when AI agents started orchestrating our tools in ways we'd never anticipated.",[15,296,297],{},"These insights go far beyond tool design and touch on fundamental questions about how API companies should think about AI-native development,\ndeveloper experience, and business strategy in an agent-driven world.",[15,299,300],{},"In our next article,\nwe'll explore how building our MCP server became a mirror that revealed opportunities to improve our underlying APIs,\nand how it unlocked user workflows we never had to build ourselves.\nThe technical lessons covered here are essential,\nbut they're just the foundation for understanding what AI-native API consumption really means.",[15,302,303],{},"For now, if you're building your own tools for AI,\nstart with these principles and prepare to be surprised by what you learn about your own APIs in the process.",[15,305,306,307,314,315,320],{},"Want to see these principles in action?\nCheck out our ",[19,308,313],{"href":309,"rel":310,"target":312},"https:\u002F\u002Fgithub.com\u002Fstadiamaps\u002Fstadiamaps-mcp-server-ts",[311],"external","_blank","MCP Server on GitHub"," and join our ",[19,316,319],{"href":317,"rel":318,"target":312},"https:\u002F\u002Fdiscord.gg\u002FqRBy6qqtdT",[311],"Discord"," to discuss your own experiences with other developers tackling these same challenges.",[322,323,324],"style",{},"html pre.shiki code .sgsFI, html code.shiki .sgsFI{--shiki-default:#24292E}html 
pre.shiki code .sYu0t, html code.shiki .sYu0t{--shiki-default:#005CC5}html pre.shiki code .sYBdl, html code.shiki .sYBdl{--shiki-default:#032F62}html .default .shiki span {color: var(--shiki-default);background: var(--shiki-default-bg);font-style: var(--shiki-default-font-style);font-weight: var(--shiki-default-font-weight);text-decoration: var(--shiki-default-text-decoration);}html .shiki span {color: var(--shiki-default);background: var(--shiki-default-bg);font-style: var(--shiki-default-font-style);font-weight: var(--shiki-default-font-weight);text-decoration: var(--shiki-default-text-decoration);}",{"title":104,"searchDepth":149,"depth":149,"links":326},[327,328,329,330,331],{"id":33,"depth":119,"text":34},{"id":68,"depth":119,"text":69},{"id":86,"depth":119,"text":87},{"id":255,"depth":119,"text":256},{"id":283,"depth":119,"text":284},"The four design principles we discovered for building tools for AI that models actually use and understand. This is the second post in our series about AI tools for API companies.","md",[335,336,337,338,339,340],"AI","MCP","Model Context Protocol","API Design","AI Agents","Tool Design",{},true,"\u002Fblog\u002Fai-tools-for-api-companies-ai-native-design-principles","Expert","2025-08-07","---\ndescription: >-\n  The four design principles we discovered for building tools for AI that models actually use and understand.\n  This is the second post in our series about AI tools for API companies.\npublished: \"2025-08-07\"\nkeywords:\n  - AI\n  - MCP\n  - Model Context Protocol\n  - API Design\n  - AI Agents\n  - Tool Design\nschemaType: TechArticle\nproficiencyLevel: Expert\n---\n\n# AI Tools for API Companies: 4 Design Principles for AI-Native API Consumption\n\nIn our [previous article](\u002Fblog\u002Fai-tools-for-api-companies-ai-needs-context\u002F), \nwe shared how our \"obvious\" approach to building tools for AI—auto-generating a MCP server from existing documentation and OpenAPI specs—failed spectacularly. 
\nAfter that failure, we had to learn how to design tools specifically for AI consumption.\n\nWe observed a fundamental difference between human developers and AI models: \nAI models have no persistent memory across sessions. \nWhen a human first encounters your API, they might struggle with the options, but after a few uses, \nthey internalize what works best. \nAI models don't. \nEvery conversation is their first time using your API.\nFor example, a breakthrough understanding about when to use country filters versus bounding boxes is lost the moment the chat ends. \nThis fundamental limitation drives every design decision that follows.\n\nIn the process of overhauling our MCP server, \nwe developed four guiding principles to help build the best tools for AI as an API company.\n\n## Tool Descriptions Are Critical\n\nWe learned that tool descriptions do the heavy lifting.\nThe description is your chance to tell the model not just what your tool does, \nbut when and why a model might want to use it. \nA good description helps the model choose the right tool for the job and understand the context where it's most useful.\n\nOur [Static Maps API](\u002Fproducts\u002Fmaps\u002Fstatic-maps\u002F) demonstrates this well. Originally, we described it as:\n\n> Generate a map with a marker at a specific location. Returns a PNG image.\n\nThis description was technically accurate but too narrow.\nModels would only use it when explicitly asked for \"a map with a marker,\" \nmissing opportunities where it could create useful visualizations.\n\nNow we describe it as:\n\n> Generate a PNG map image of an area, optionally including markers and a line (e.g. to draw a route or a boundary)\n\nThe improved description helps models understand the broader capabilities and context where the tool is useful. 
\nInstead of only thinking \"marker on map,\" they now consider it for routes, boundaries, and general area visualization.\n\n## Context Window Management Matters\n\nWe discovered that token consumption is a hidden constraint. \nLarge tool descriptions and verbose responses quickly eat into the available, \nor perhaps more importantly, *usable* context window, leaving less room for user input and driving up costs.\n\nThe cost of cognitive load taught us to be frugal with our tool descriptions and the number of parameters for each tool. \nEvery word needed to help the model understand when to use the tool or how to interpret the results. \nThe same applies to tool responses. \nInstead of packing responses with potentially useful but unnecessary elements, \neach element of the response should be focused on helping the model answer the question.\n\nWe believe this merits further investigation.\nWe want to understand how tuning descriptions for specific models could yield even better performance, \nespecially since different AI models interpret tool descriptions in different ways.\n\n## Optimize Output Formats for AI Interpretation\n\nOur API responses, while great for programmatic consumption, needed rethinking for AI consumption. \nInstead of just returning raw JSON, we started designing responses that help models interpret and use the data effectively. \nSometimes this meant adding human-readable summaries alongside or instead of structured data. \nOther times it meant restructuring the response to highlight the most relevant information. \nThe goal was always to reduce the cognitive load on the model while preserving the essential information.\n\nOur [Timezone API](\u002Fproducts\u002Fgeospatial-apis\u002F) illustrates this perfectly. 
\nOriginally, we returned the technical data that developers needed:\n\n```json\n{\n  \"tz_id\": \"Europe\u002FZurich\",\n  \"base_utc_offset\": 3600,\n  \"dst_offset\": 3600\n}\n```\n\nThis worked fine for developers who could calculate the current local time themselves. \nBut AI models struggled with a surprising limitation: they don't inherently know what time it is right now. \nWhen someone asked \"Is this restaurant in Zurich open now?\", \nmodels couldn't reliably determine the current local time from just the timezone offset data.\n\nSo we added the local timestamps to our response:\n\n```json\n{\n  \"tz_id\": \"Europe\u002FZurich\",\n  \"base_utc_offset\": 3600,\n  \"dst_offset\": 3600,\n  \"timestamp\": 1749479378,\n  \"local_rfc_2822_timestamp\": \"Mon, 9 Jun 2025 16:29:38 +0200\",\n  \"local_rfc_3389_timestamp\": \"2025-06-09T16:29:38+02:00\"\n}\n```\n\nNow AI models can immediately understand both the current local time and work with the format they parse most naturally. \nThis change also made the API more useful for human developers who previously had to calculate local time themselves.\n\n## Split Complex Endpoints into Focused Tools\n\nOur biggest breakthrough came from abandoning the one-tool-per-endpoint approach. \nInstead of exposing every endpoint as a tool with many parameters, \nwe learned to split them into focused, use-case-specific tools, \noften removing complexity to ensure models can effectively use each tool.\n\nTake our [geocoding](\u002Fproducts\u002Fgeocoding-search\u002Fgeocoding\u002F) tool as an example. \nOriginally, we exposed all the filtering options available in our API—bounding box coordinates, \ncircular search areas, layer filters, and country restrictions. \nWe thought giving AI models more control would lead to better results.\n\nIn practice, the models never used most of these options, even when they should have. 
\nWhen they did try to use them, they often got confused about which filter was most appropriate and picked sub-optimal choices. \nA request to \"find coffee shops in downtown Seoul\" might trigger attempts to calculate bounding box coordinates rather than simply using the country filter.\n\nWe eventually eliminated everything except the country filter. \nWe kept this one because it was the simplest—just a 3-character ISO code rather than coordinate lists for bounding boxes or radius calculations. \nIt also happens that most models can figure out country codes quite easily from their pre-trained general knowledge, \nturning \"find addresses in South Korea\" into a simple `country: \"KOR\"` parameter.\n\nThe key insight: AI models perform better with multiple simple tools than one complex tool. \nThey can chain together focused tools to solve complex problems, \nbut they struggle to navigate a tool with dozens of configuration options. \nAs our tool usage grows, we expect to iteratively add new, \nfocused tools based on our existing endpoints to better help models answer location questions.\n\n## From Spectacular Failure to Working Tools\n\nThese four principles transformed our MCP server:\ncontextual tool descriptions, focused endpoint splitting, \nAI-optimized responses, and careful token management.\nAs we discovered them, the spectacular failure we described in our [first article](\u002Fblog\u002Fai-tools-for-api-companies-ai-needs-context\u002F) became something that actually works.\nAI models can now successfully help users find routes, geocode addresses, and interact with our location services in natural, intuitive ways.\n\nBut mastering the technical aspects of tool design was just the beginning. 
\nThe real surprises came from what we learned about our own APIs in the process, \nand the unexpected capabilities that emerged when AI agents started orchestrating our tools in ways we'd never anticipated.\n\nThese insights go far beyond tool design and touch on fundamental questions about how API companies should think about AI-native development, \ndeveloper experience, and business strategy in an agent-driven world.\n\nIn our next article, \nwe'll explore how building our MCP server became a mirror that revealed opportunities to improve our underlying APIs, \nand how it unlocked user workflows we never had to build ourselves. \nThe technical lessons covered here are essential, \nbut they're just the foundation for understanding what AI-native API consumption really means.\n\nFor now, if you're building your own tools for AI, \nstart with these principles and prepare to be surprised by what you learn about your own APIs in the process.\n\nWant to see these principles in action? \nCheck out our [MCP Server on GitHub](https:\u002F\u002Fgithub.com\u002Fstadiamaps\u002Fstadiamaps-mcp-server-ts) and join our [Discord](https:\u002F\u002Fdiscord.gg\u002FqRBy6qqtdT) to discuss your own experiences with other developers tackling these same challenges.","TechArticle",{"title":5,"description":332},"blog\u002Fai-tools-for-api-companies-ai-native-design-principles","IxBvXvWaaZGfq92DBYQuHkB0on8eQYE8R19Nsl7c7yY",[352,359,373],{"title":353,"description":354,"path":355,"published":356,"keywords":357,"rawbody":358},"AI Tools for API Companies: AI Needs Context, or How Our Auto-Generation Failed Spectacularly","Why our obvious approach to building MCP servers failed spectacularly,  and what we learned about AI-native API consumption. 
This is the first post in our series about AI tools for API companies.","\u002Fblog\u002Fai-tools-for-api-companies-ai-needs-context","2025-08-01",[335,336,337,338,339],"---\ndescription: >-\n  Why our obvious approach to building MCP servers failed spectacularly, \n  and what we learned about AI-native API consumption.\n  This is the first post in our series about AI tools for API companies.\npublished: \"2025-08-01\"\nkeywords:\n  - AI\n  - MCP\n  - Model Context Protocol\n  - API Design\n  - AI Agents\nschemaType: TechArticle\nproficiencyLevel: Expert\n---\n\n# AI Tools for API Companies: AI Needs Context, or How Our Auto-Generation Failed Spectacularly\n\nThe AI agent revolution is here, and with it, everyone's talking about agent tools.\nAs a location API company, we've spent years perfecting location tools for human developers.\nSo when the [Model Context Protocol (MCP)](https:\u002F\u002Fmodelcontextprotocol.io\u002F) promised to make our APIs accessible to AI agents, \nwe figured it would be straightforward.\nJust auto-generate an MCP server with a healthy selection of tools like we do with SDKs, right?\n\nWrong. \nOur initial attempts failed spectacularly, \nand we learned some unintuitive lessons: \nthe biggest of which is that the context in which AI models consume APIs matters enormously.\n\n## What Are AI tools?\n\nFundamentally, tools are how language models interact with external systems.\nNormally, when a user asks an AI assistant to \"find the best route from Seoul to Busan,\" \nthe model can't give anything more than a vague summary.\nWith the right tools, \nhowever, AI can use APIs to look up addresses, provide real-time directions, and even map the resulting path.\n\nThe Model Context Protocol standardizes how AI models discover and interact with these tools.\nMCP servers act as bridges between AI systems and external systems, \ndefining a consistent interface for tool discovery, parameter specification, and response handling. 
\nThey're gaining traction because they solve a critical infrastructure problem: \nhow to reliably connect AI agents to the vast ecosystem of existing APIs.\n\nAs engineers, this looked a lot like a pattern we already knew: SDKs for developers.\n\n## The \"Obvious\" Solution\n\nGiven we'd already spent years building solid SDKs generated from hand-crafted OpenAPI specifications, \nit seemed logical to start there. \nFrom this foundation, we auto-generated our first MCP server.\n\nFrom start to finish, \nthe whole process took a couple of hours. \nWe had a working MCP server, complete with tools for geocoding, routing, and creating maps. \nIt seemed perfect.\n\nThen we tried our prompt:\n\n> Find the best route from Seoul to Busan.\n\n## What Happened\n\nImmediately, our quick win began showing its limits. \nThe models were using our tools in subtle but fundamentally flawed ways, and that’s the worst kind of failure.\n\nLocation APIs, even in the simplest form, are complex. \nMost endpoints offer dozens of inputs and parameters. \nFor humans, we solve this by offering clear documentation and guides. \nUnfortunately, this complexity quickly confused models. \nIn the maze of options, they couldn't figure out which parameters mattered. \nWhen finding a route, should it optimize for time or distance? \nWhat about tolls? \nShould it choose car, bike, truck, or some other mode of transportation? \nThis was further compounded by models not even knowing which questions to ask to feed the tool the right parameters.\n\nEven when models did choose reasonable parameters, they struggled with complex responses. \nOur APIs return detailed JSON objects optimized for programmatic consumption, \nbut AI models often missed crucial information or got overwhelmed by unnecessary details.\n\nToken consumption quickly became problematic. 
\nOur mega-tools with extensive parameter lists consumed large amounts of context space, \nleaving less room for user input, driving up model costs, and, more importantly, hurting efficiency. \nA single tool description could easily eat 500+ tokens before the model even chose which tool it should use.\n\nThe pattern was clear across every endpoint we tested: \nAPIs designed for human developers, no matter how well-crafted, don't automatically translate to effective AI tools. \nOur fundamental assumption—that well-built, well-documented APIs automatically make good AI tools—was completely wrong.\n\n## What We Were Actually Building\n\nWe had to take a step back and consider what we were actually trying to do.\n\nWhen we started, we assumed we were building another SDK, but we weren't. \nWe were building something fundamentally different: contextual tools specifically structured for AIs. \nThe key insight we missed initially is that AI models consume APIs through a completely different lens than human developers.\n\nHuman developers read documentation, understand business logic, and make nuanced decisions about parameter selection.\nAI models need tools designed specifically for their consumption patterns: \nfocused, well-described, and optimized for the constraints of language model reasoning.\n\n## What's Next\n\nThis failure taught us much of what we needed to know about building MCP servers that actually work. \nAfter our auto-generated approach failed, we had to retool our entire strategy.\n\nIn our [next article](\u002Fblog\u002Fai-tools-for-api-companies-ai-native-design-principles\u002F), \nwe'll share the design principles we discovered for creating AI-friendly tools, \nthe unexpected ways building MCPs improved our underlying APIs, \nand the new capabilities this unlocked that we never had to build ourselves.\n\nFor now, if you're looking at MCPs for your own API company, remember this: \nthe obvious solution rarely works when you're building for AI consumption. 
\nThe patterns that make APIs great for human developers need fundamental rethinking for AI agents.\n\nWant to stay ahead of the AI tooling curve? \n[Subscribe to our newsletter](https:\u002F\u002Fmailchi.mp\u002Fstadiamaps\u002Fmcp-leads-landing) for insights on building APIs that work seamlessly with AI agents, \nMCP server best practices, and the evolving landscape of AI-native development.\n\nIf you're impatient and want to see our principles in action right now, \ncheck out our [MCP Server on GitHub](https:\u002F\u002Fgithub.com\u002Fstadiamaps\u002Fstadiamaps-mcp-server-ts). \nAnd join our [Discord](https:\u002F\u002Fdiscord.gg\u002FqRBy6qqtdT) to discuss your own MCP experiences with other developers tackling these same challenges.",{"title":360,"description":361,"path":362,"published":363,"keywords":364,"rawbody":372},"Why Basic OpenStreetMap Routing Needs Real-Time Traffic","OpenStreetMap is a world-class road network, but without real-time traffic it's a static dataset. Here's why algorithmic ETAs fall apart in production logistics and how Stadia Maps closes the gap with TomTom-powered routing.","\u002Fblog\u002Fwhy-osm-routing-needs-real-time-traffic","2026-05-12",[365,366,367,368,369,370,371],"Routing","Navigation","OpenStreetMap","Traffic Data","Matrix Routing","Logistics","TomTom","---\ndescription: >-\n  OpenStreetMap is a world-class road network, but without real-time traffic\n  it's a static dataset. Here's why algorithmic ETAs fall apart in production\n  logistics and how Stadia Maps closes the gap with TomTom-powered routing.\nexcerpt: >-\n  OpenStreetMap is great geography, but without real-time traffic it falls\n  short on ETAs. 
Stadia Maps closes the gap with TomTom-powered routing.\npublished: \"2026-05-12\"\nkeywords:\n  - Routing\n  - Navigation\n  - OpenStreetMap\n  - Traffic Data\n  - Matrix Routing\n  - Logistics\n  - TomTom\nauthor:\n  name: \"Ian Wagner\"\n  jobTitle: \"Founder & President \u002F COO\"\n  sameAs:\n    - \"https:\u002F\u002Fwww.linkedin.com\u002Fin\u002Fian-w-wagner\u002F\"\n---\n\n# Why Basic OpenStreetMap Routing Needs Real-Time Traffic\n\n> OpenStreetMap (OSM) provides a world-class geographic foundation, but it remains a static dataset. Without real-time traffic integration, routing engines must rely on algorithmic proxies—like road class and legal speed limits—which often lead to unreliable ETAs and logistics bottlenecks.\n\n## The Problem\n\n[OpenStreetMap (OSM)](https:\u002F\u002Fwww.openstreetmap.org\u002Fabout) is one of the world's leading road maps, but a persistent gap remains between fixed geographic data and a [live navigation experience](\u002Fproducts\u002Frouting-navigation\u002F). Without dedicated traffic data, Estimated Times of Arrival (ETAs) are essentially educated guesses. While OSM is excellent at mapping the world's road network, a static dataset cannot capture the actual driving conditions at this exact moment. In enterprise-grade logistics, the lack of live data is often the first significant technical hurdle.\n\n## The Limits of Algorithmic Guesswork\n\nIn the absence of real-time data, a routing engine must estimate travel speeds based on tags and a few common proxies:\n\n- **Road Class:** Assuming a motorway is always faster than a residential street.\n- **Tagged Speed Limits:** Using the legal maximum as the baseline (when the tag even exists).\n- **Network Density:** Adjusting for urban vs. rural environments.\n- **Time of Day:** Using low-granularity buckets like \"daytime\" and \"nighttime.\"\n\nReal-world data show wild variances compared to these static estimates. Road class is a blunt instrument for predicting speed. 
Missing speed limit tags in open datasets force routing engines to rely on broad averages, resulting in unreliable ETAs and logistics delays. Rule-based algorithms are also notoriously bad at predicting choke points because open datasets don't account for traffic light timings, congestion near specific exits, or the \"invisible\" friction of a busy intersection.\n\n## The Stadia Maps Difference\n\nTo move from guesswork to precision, we integrated [TomTom's global traffic data](https:\u002F\u002Fwww.tomtom.com\u002Fproducts\u002Ftraffic-apis\u002F) directly into the [Stadia Maps routing engine](https:\u002F\u002Fdocs.stadiamaps.com\u002Frouting\u002F). High-resolution historical profiles and live feeds allow for accurate, real-time routing. We provide this through three key technical pillars:\n\n1. **Global Coverage:** Access to consistent data across more countries than almost any other vendor.\n2. **Rapid Updates:** A traffic latency of approximately two minutes allows our API to suggest alternate routes almost as soon as a wreck occurs.\n3. **Historical Profiles:** Deep granularity forms the backbone of predictive routing. High-resolution historical data enables accurate, time-dependent routing, so you can plan a route in advance for Tuesday at 8:00 AM based on what typically happens on Tuesdays at 8:00 AM.\n\n## Fleet Intelligence at Scale\n\nFor dispatch, optimization, and fleet operations, [matrix routing](https:\u002F\u002Fdocs.stadiamaps.com\u002Frouting\u002Ftime-distance-matrix\u002F) (calculating the time and distance between many origins and destinations) is the engine's most critical function.\n\nThe Stadia Maps infrastructure supports matrix requests that are significantly larger than most competitors allow on standard plans. 
By integrating traffic data directly into these large-scale requests, we eliminate the need for developers to split requests into smaller chunks, reducing unnecessary complexity and latency.\n\nDevelopers maintain full agency over their implementation. We provide the fastest route based on live conditions, but how often you re-route remains entirely in your control. That choice of revalidation frequency puts you in charge of the trade-off between real-time accuracy and [scaling costs](\u002Fpricing\u002F), keeping your bills as predictable as your ETAs.\n\n---\n\n[Create a free account](https:\u002F\u002Fclient.stadiamaps.com\u002Fsignup\u002F) to start building with real-time traffic and high-performance routing today. Our [documentation](https:\u002F\u002Fdocs.stadiamaps.com\u002Frouting\u002F) provides everything you need to integrate TomTom-powered precision into your existing OSM workflow.\n",{"title":374,"description":375,"path":376,"published":377,"keywords":378,"rawbody":384},"2026 Satellite Imagery Update: 37 Million km² at 30cm Resolution","The 2026 Alidade Satellite update expands 30cm-resolution coverage to 37 million km², adds seamless country-wide mosaics for Japan, Nigeria, Mexico, the UAE, and Eastern South Africa, and refreshes our global 1.5m baseline from the latest SPOT data.","\u002Fblog\u002F2026-satellite-imagery-update","2026-04-27",[379,380,381,382,383],"Satellite Imagery","Aerial Photography","Map Update","High Resolution","Alidade Satellite","---\ndescription: >-\n  The 2026 Alidade Satellite update expands 30cm-resolution coverage to 37 million km²,\n  adds seamless country-wide mosaics for Japan, Nigeria, Mexico, the UAE, and Eastern\n  South Africa, and refreshes our global 1.5m baseline from the latest SPOT data.\npublished: 2026-04-27\nkeywords:\n  - Satellite Imagery\n  - Aerial Photography\n  - Map Update\n  - High Resolution\n  - Alidade Satellite\n---\n\n# 2026 Satellite Imagery Update: 37 Million km² at 30cm 
Resolution\n\nIf you've built anything on top of satellite imagery, you know the pain of inconsistent resolution. You zoom into one region and get crisp rooftops. Pan over to the next and it's a blurry patchwork from three years ago. That inconsistency isn't just cosmetic: it erodes trust in whatever you're building on top of it.\n\nWe regularly refresh our [Alidade Satellite](https:\u002F\u002Fstadiamaps.com\u002Fproducts\u002Fmaps\u002Fmap-styles\u002Fsatellite-imagery\u002F) imagery as new high-resolution data becomes available from Airbus. This update is one of our most significant, expanding both the depth and freshness of our coverage.\n\n::cross-platform-map{id=\"map\" style=\"height: 400px;\"}\n---\ncenter: [139.6934, 35.6857]\nscroll-zoom: true\nzoom: 16.5\ntheme: alidade_satellite\nuse-theme-switcher: false\nuse-search: true\n---\n::\n\n## 30cm Coverage, Scaled\n\nWe now offer 37 million km² of 30cm-resolution satellite imagery, enough detail to distinguish individual vehicles, building footprints, and infrastructure at high zoom levels. For applications like urban planning tools, insurance assessments, or logistics platforms, this is the difference between useful and decorative.\n\nThis release also adds seamless 30cm country-wide mosaics for Japan, Nigeria, Mexico, the UAE, and Eastern South Africa. \"Seamless\" matters here: no visible tile boundaries, no abrupt shifts in color or season. Just consistent, high-resolution coverage across the entire country.\n\n## A Fresher Global Baseline\n\nBeyond the 30cm expansion, we've completed a full refresh of our 1.5m-resolution dataset covering the Earth's landmasses, derived from the latest SPOT Global layer. Even at lower zoom levels, you're working with current data rather than imagery that's aging out.\n\nFreshness matters as much as resolution. Across our entire dataset, the area-weighted average age is roughly 1.6 years. 
Nearly two-thirds of our coverage is less than a year old, and only 7% is older than three years. That share continues to shrink with each refresh.\n\nCombined with our [2025 satellite imagery refresh](https:\u002F\u002Fstadiamaps.com\u002Fblog\u002F2025-satellite-imagery-refresh\u002F), this update keeps every pixel in our dataset at 1.5m or better, with 37 million km² at 30cm and another 7 million km² at 50cm.\n\n## What This Means for Your Stack\n\nIf you're using Alidade Satellite, these updates are already live. No API changes, no migration. The same tile endpoints now serve fresher, sharper data. Integration works the same way it always has via MapLibre, Leaflet, OpenLayers, or any other mapping library that supports raster tiles.\n\nWe don't track or profile your end users. The imagery is delivered directly, with no behavioral tracking layer between your application and the tiles.\n\n## Try It\n\nThe updated satellite imagery is available now for all Stadia Maps customers. If you're new, [create a free account](https:\u002F\u002Fclient.stadiamaps.com\u002Fsignup\u002F) and see the difference at zoom level 18.\n",1778676026619]
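
The raster-tile integration described under "What This Means for Your Stack" can be sketched as a MapLibre-compatible style object. This is a minimal sketch under stated assumptions: the `alidade_satellite` tile URL template, the `api_key` query parameter, and the attribution string are illustrative guesses, not taken from the Stadia Maps documentation — confirm the exact endpoint and required attribution there before shipping.

```javascript
// Build a minimal MapLibre-compatible style that layers satellite raster tiles.
// NOTE: the tile URL template, `api_key` parameter, and attribution below are
// assumptions for illustration — check the Stadia Maps docs for the real values.
function satelliteStyle(apiKey) {
  return {
    version: 8,
    sources: {
      satellite: {
        type: "raster",
        tiles: [
          `https://tiles.stadiamaps.com/tiles/alidade_satellite/{z}/{x}/{y}.jpg?api_key=${apiKey}`,
        ],
        tileSize: 256,
        // Assumed attribution; the actual requirements may differ.
        attribution: "&copy; CNES, Airbus DS &copy; Stadia Maps",
      },
    },
    layers: [{ id: "satellite", type: "raster", source: "satellite" }],
  };
}

// In the browser, with maplibre-gl loaded:
// const map = new maplibregl.Map({
//   container: "map",
//   style: satelliteStyle("YOUR-API-KEY"),
//   center: [139.6934, 35.6857], // the Tokyo view from the embedded demo above
//   zoom: 16.5,
// });
```

Because the style is plain JSON, the same object also works with OpenLayers or Leaflet raster adapters; only the map-construction call changes.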