North · A working strategy doc
What I'm building next, and why.
Six MCPs on npm. 2,500 monthly downloads. Seven years designing. The next 2–6 months convert that surface into recognition that pays. This is the map.
00 / Foreword · Why this exists
A living document of what's worth building next, with the reasoning naked.
I asked myself one question. What do I build over the next two to six months that converts six shipped MCPs and seven years of design work into either a senior product design role at an AI-native company, or a practice that can pay Tokyo rent on its own terms?
What follows is the honest answer. Three top bets, ten strong picks, five wild cards, and a parallel track of institutional moves. Each one is verified against the current ecosystem, named where the name is open, and time-estimated against my actual capacity.
The frame is recognition. Not stars. Not downloads. Not generic followers. Recognition meaning: the people who decide who Google Japan, Anthropic, Cursor, Linear, Cognition, Sakana, or a serious gaishikei design org hire next, encounter my work and form an opinion. That is the only metric I am optimizing.
If you're reading this from outside my head, the honest invitation is to disagree. Push back, send me your own list, or tell me which bet you'd take instead. marselbait@gmail.com.
01 / The frame · What gets recognition right now
Three surfaces compound: building, writing, speaking. The agentic shift is the unlock. Tokyo and Japan are still a moat.
The recognition surfaces have not changed in fifteen years, but their relative weight has. Right now, in the agentic era, they rank in this order:
- Public shipped work. Real artifacts that other people install, fork, or render. npm packages, GitHub repos, deployed demos. Not portfolio screenshots. Distribution is the credential.
- Writing with a thesis. Long-form essays that take a position about where AI products are going, supported by demos. Not opinions. Claims with receipts.
- Speaking at the conferences hiring managers attend. DevFest Tokyo, Config (Figma), Anthropic and Google events, regional Tokyo dev meetups. A talk is the highest-leverage hour you can spend, because the room is pre-filtered.
All three compound. A talk gets remembered when the talker also has shipped work and a written thesis. An essay gets shared when the author ships the artifact alongside it. Shipped work without writing or speaking is just code.
The agentic-era angle
Most product designers are still designing for humans. The shift in 2025–2026 is that an increasing fraction of UI is now consumed by AI agents directly, not humans. That is the frame I'm betting on, and it is consistent with my portfolio voice ("Building for the era when the user isn't always human"). Every project below ladders to this thesis or to a Japan-specific extension of it.
Why Japan still matters as a moat
The temptation, when chasing recognition from US-headquartered companies, is to abandon Japan as too provincial. That's wrong. Japan-specific UX expertise is a moat precisely because it's hard to fake remotely. Every Western designer hired into Anthropic or Google can do English forms. Almost none of them can do 姓/名 order, furigana, the qualified invoice system, or the ten politeness levels of business keigo. Combining "designer who ships" with "actually understands Japan" is a category of one or two people, not a hundred. The leverage is in the combination.
02 / Top bets · The three I'd start tomorrow
Time-sensitive, ladder-fitting, and within my actual capacity. I do these three first; everything else stacks behind.
A2UI Japan · A localized A2UI showcase nobody else can credibly build.
Status
Original. No competitor.
Effort
10–14 days, focused.
Visibility ceiling
Featured by Google DevRel.
Time window
12–18 months before saturation.
A2UI is Google's open-source standard for agent-driven generative UI, released December 2025 and updated to version 0.9 in spring 2026. The community now has a Gallery, a Composer, official samples like RizzCharts, and a few vertical apps. What it does not have is any showcase built for a non-Western locale. Every example assumes English forms, Western name order, US date formats, and Latin-script-only rendering.
Japanese forms are the obvious gap. The official Restaurant Finder example would not survive contact with a Japanese user. Family name comes first. Furigana fields are required. Phone numbers split into three components. Postal codes auto-fill addresses. Date entry uses 年月日. Casual keigo for consumer apps, formal for B2B. None of this is hard to know. All of it is invisible to Western teams.
I already shipped japan-ux-mcp, the only MCP that encodes these conventions as a tool surface. The bet is to compose A2UI + japan-ux-mcp into a public showcase that demonstrates what generative UI looks like when the agent is generating UI for a Japanese user. The deliverable is not abstract: three to five concrete A2UI flows (registration form, hotel booking, contact form, refund request, business inquiry) rendered as live demos with the agent generating culturally-correct output in real time.
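To make the deliverable concrete, here is an illustrative sketch of the before/after contrast the showcase renders. Everything in it is my own stand-in: the `FormField` shape, the field ids, and the `hasFurigana` check are not the actual A2UI v0.9 schema, just the cultural delta expressed as data.

```typescript
// Hypothetical sketch only: the shapes below are illustrative stand-ins,
// not the real A2UI v0.9 component schema. The point is the cultural delta.
type FormField = {
  id: string;
  label: string;
  kind: "text" | "tel" | "postal" | "date";
  required: boolean;
  hint?: string;
};

// What a default (Western-assumption) generation tends to produce.
const defaultForm: FormField[] = [
  { id: "first_name", label: "First name", kind: "text", required: true },
  { id: "last_name", label: "Last name", kind: "text", required: true },
  { id: "phone", label: "Phone", kind: "tel", required: true },
];

// What the same request should produce for a Japanese user: family name
// first, furigana fields, postal-code-first address, 年月日 date entry.
const japaneseForm: FormField[] = [
  { id: "sei", label: "姓", kind: "text", required: true },
  { id: "mei", label: "名", kind: "text", required: true },
  { id: "sei_kana", label: "セイ（フリガナ）", kind: "text", required: true, hint: "全角カタカナ" },
  { id: "mei_kana", label: "メイ（フリガナ）", kind: "text", required: true, hint: "全角カタカナ" },
  { id: "postal", label: "郵便番号", kind: "postal", required: true, hint: "住所を自動入力" },
  { id: "phone", label: "電話番号", kind: "tel", required: true, hint: "090-1234-5678" },
  { id: "dob", label: "生年月日", kind: "date", required: false, hint: "年月日" },
];

// The check the showcase makes visible at a glance: did the agent even
// know furigana fields exist?
const hasFurigana = (form: FormField[]) =>
  form.some((f) => f.id.endsWith("_kana"));
```

The "before" column of every demo fails this kind of check; the "after" column, with japan-ux-mcp wired in, passes it. That contrast is the whole pitch.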
Build plan
- Day 1–2. Read A2UI v0.9 spec end-to-end. Set up React renderer locally. Run the official Restaurant Finder sample. Confirm the rendering pipeline works.
- Day 3–4. Wire japan-ux-mcp into the agent's tool surface. Validate that asking for "a Japanese registration form" produces correct A2UI output (姓/名 split, furigana, postal flow).
- Day 5–8. Design and implement five flows. Each one has a "before" (default A2UI generation) and an "after" (with japan-ux-mcp wired in). The contrast is the proof.
- Day 9–10. Build the showcase site itself. Single-page, dark editorial design, matching this document's voice. Each flow is a live, interactive example.
- Day 11–12. Write the accompanying essay. Title candidate: "What A2UI looks like when the user is Japanese." Publish to Medium under Google Cloud Community tag, cross-post to dev.to and personal site.
- Day 13–14. Submit a pull request to google/A2UI adding the japan-ux examples to their official sample list. Ping CopilotKit's gallery. Tag @GoogleDevs, @GoogleAI, and the Opal team on X.
Launch plan
The artifact is the entry point but the distribution is the work. Three coordinated waves. The X thread the day of launch (technical screenshots, before/after gifs, japan-ux-mcp callouts). The Medium essay the same week. A submitted lightning talk to the next Tokyo Google Developer Group meetup. A direct email to two Google DevRel contacts I can identify on LinkedIn.
What kills this
The single risk is that Google's own A2UI team ships a localization story before I do. They probably will, eventually. But they will ship for fifteen languages at once, with whatever Google's i18n team produces. A culturally-detailed Japanese showcase, designer-led, will still stand because it's specific and credibly opinionated in a way a fifteen-locale rollout cannot be. The window is the next two to six months.
Why I'd start this one tomorrow. It compounds my existing portfolio (japan-ux-mcp gets a flagship demo), it lands on Google's exact hot surface (A2UI), and it cannot be credibly cloned by a Western designer in the time it takes them to learn what furigana is.
konbini-mcp · An MCP for Japan's convenience-store APIs. Distinct, useful, viral.
Status
No equivalent shipped.
Effort
5–7 days.
Visibility ceiling
Press attention, viral X thread.
Name
Available on npm, GitHub.
7-Eleven, Lawson, and FamilyMart between them have somewhere over fifty thousand stores in Japan. Each chain operates a small empire of services beyond the obvious: in-store printing, ATM withdrawals, ticket pickups, package shipping (takkyubin), bill payments, ID-photo machines. These are public-facing services, not enterprise APIs, but every chain exposes some structured data: store finders, product availability, photo-print queues. None of it is in any MCP I've found.
The pitch is sharp. An AI agent that can find the nearest 7-Eleven that prints A4 in color, queue your document, and tell you which exit to take from the station. That sentence sells itself. It works because it's specifically Japanese, mundanely useful, and immediately legible to anyone who has lived in Tokyo.
Build plan
- Day 1. Audit each chain's available endpoints. Some have official APIs (Lawson Smart Pickup), some require web scraping (FamilyMart's product locator), some are partner-only and skipped (7-Eleven's payment APIs).
- Day 2–3. Implement the core tool surface. find_konbini(query, lat, lng, services?) returns nearest stores by chain with available services. list_services(chain, store_id) returns what that specific store offers (not every Lawson has Loppi printing). print_document(...) generates a Lawson Smart Pickup or 7-Eleven netprint job (where APIs allow) and returns the user code.
- Day 4. Add prompts. "Where can I print this PDF in color near Shibuya?", "Find a konbini near me that has the latest Mandarake catalog."
- Day 5. Resources. Trademark notes. Service capability matrix. Common error states (most stores ID-restrict alcohol/tobacco purchases at certain hours).
- Day 6. Tests, CI, npm publish, Glama and PulseMCP submission.
- Day 7. Launch. X thread with screenshot of Claude printing a PDF at the corner Lawson. Cross-post to r/Tokyo and r/Japanlife. Submit to Hacker News.
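The tool-surface logic in the build plan above can be sketched independently of the MCP wiring. The store records, service names, and distance math below are illustrative; the shipped version registers these functions as MCP tools and calls real store-finder endpoints, and find_konbini would also take the free-text query, omitted here.

```typescript
// Sketch of the v0.1 tool surface over mock data. Stores, services, and
// coordinates are made up for illustration.
type Chain = "seven" | "lawson" | "familymart";
type Service = "color_print" | "atm" | "ticket_pickup" | "takkyubin";

interface Store {
  id: string;
  chain: Chain;
  name: string;
  lat: number;
  lng: number;
  services: Service[];
}

const stores: Store[] = [
  { id: "l-001", chain: "lawson", name: "Lawson Shibuya 1-chome", lat: 35.659, lng: 139.703, services: ["color_print", "atm"] },
  { id: "s-014", chain: "seven", name: "7-Eleven Dogenzaka", lat: 35.657, lng: 139.698, services: ["atm", "takkyubin"] },
];

// Rough planar distance is fine at konbini scale (hundreds of meters).
const dist = (a: { lat: number; lng: number }, b: { lat: number; lng: number }) =>
  Math.hypot(a.lat - b.lat, (a.lng - b.lng) * Math.cos((a.lat * Math.PI) / 180));

// find_konbini: nearest stores that offer every requested service.
function findKonbini(lat: number, lng: number, services: Service[] = []): Store[] {
  return stores
    .filter((s) => services.every((svc) => s.services.includes(svc)))
    .sort((a, b) => dist(a, { lat, lng }) - dist(b, { lat, lng }));
}

// list_services: what one specific store offers.
function listServices(storeId: string): Service[] {
  return stores.find((s) => s.id === storeId)?.services ?? [];
}
```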
What kills this
ToS friction. Some konbini APIs are partner-gated. The realistic v0.1 scope covers store-finder and product-locator across all three chains, plus Lawson Smart Pickup printing because that endpoint is open. The v0.2 reach covers 7-Eleven netprint via their public form, with appropriate throttling and a clear "this is unofficial" disclaimer. Anything that requires merchant credentials is out of scope.
Why this one ships in a week. Konbini is a category Western developers cannot credibly build because they don't know what Loppi is. It's also iconic enough to be screenshot-worthy. Every Tokyo developer becomes a free distribution channel.
DevFest Tokyo 2026 talk · An hour of Google Japan engineers listening, instead of another round of cold emails.
Status
CFP not yet open. Watch.
Effort
2 days CFP + 5 days deck if accepted.
Visibility ceiling
Direct face time, Google Japan recruiter pipeline.
Cost of trying
Two evenings of writing.
DevFest Tokyo is the largest Google Developer Group event in Japan, typically held in October or November. The audience includes Google Japan engineers (some of them as speakers, others as attendees), regional engineering leads from Google APAC, and a tail of Tokyo-based developers from gaishikei companies who use Google's infrastructure. It is the highest-density room of the people I want to meet.
CFPs typically open three to four months before the event. That means the window for applying is roughly July to August. Right now I'm preparing.
The talk
Working title: "Designing for the era when the user isn't always human: six MCPs from Tokyo."
Forty-five minutes. Three acts. Act one: how shipping software changed once Claude Code became the production environment, told through the actual evolution of pdf-it from blank file to npm package in seven days. Act two: what designing for AI agents as users looks like, using japan-ux-mcp as the case study. Act three: what stays human, what becomes machine, and what a senior product designer's day looks like in this stack.
The talk has to be a real talk, not a sales pitch. The job is to leave the room with a reputation, not a stack of business cards.
Why this works even if I don't get accepted
CFPs are reviewed by the same Google Japan organizers who later refer candidates to internal hiring teams. A well-written CFP is a portfolio artifact even when it loses. A rejection gets you the organizer's name and a "let's keep in touch" reply, which is the next-best outcome. Most engineers don't apply because of imposter syndrome. That asymmetry is the whole opportunity.
03 / Strong picks · Ten more, ranked by leverage
Solid second-tier bets. Any of these compounds the portfolio meaningfully if shipped.
Tokyo housing MCP · Suumo and Athome as a structured agent surface.
Anyone moving to Tokyo discovers within their first week that finding an apartment in Japan is a UX nightmare. Suumo, Athome, and Homes.jp expose listings as semi-public web pages but no public structured search API exists. The opportunity is a scraping-respectful MCP that returns normalized listings (rent, key money, neighborhood, station distance, year built, gaijin-friendly flag) so an AI agent can actually compare apartments instead of asking the user to copy-paste forty links.
Visibility ceiling is high because the audience is every English-speaking expat in Japan, plus every Tokyo-based developer who would happily share it. Risk is ToS: scraping listings has legal grey areas. Solution is to operate on the same legal footing aggregators use, with appropriate rate-limiting, attribution, and a clear redirect to the source listing. Effort: 7–10 days for v0.1, longer for the gaijin-friendly inference logic.
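The normalization step is the actual product here. As a sketch, with the listing shape, field names, and parsing rules as my assumptions about typical listing text rather than a working scraper:

```typescript
// Sketch of listing normalization. The Listing shape and parse rules are
// illustrative assumptions, not output from a real Suumo/Athome scraper.
interface Listing {
  rentYen: number;        // monthly rent, in yen
  keyMoneyMonths: number; // 礼金, in months of rent
  depositMonths: number;  // 敷金, in months of rent
  stationWalkMin: number;
  yearBuilt: number;
}

// Parse strings like "8.5万円" (man-yen) into integer yen.
function parseManYen(s: string): number {
  const m = s.match(/([\d.]+)万円/);
  if (!m) throw new Error(`unparseable rent: ${s}`);
  return Math.round(parseFloat(m[1]) * 10_000);
}

// Parse "徒歩7分" (7-minute walk) into minutes.
function parseWalk(s: string): number {
  const m = s.match(/徒歩(\d+)分/);
  if (!m) throw new Error(`unparseable walk time: ${s}`);
  return parseInt(m[1], 10);
}

const example: Listing = {
  rentYen: parseManYen("8.5万円"),
  keyMoneyMonths: 1,
  depositMonths: 1,
  stationWalkMin: parseWalk("徒歩7分"),
  yearBuilt: 2016,
};
```

Once listings are in this shape, "compare forty apartments" becomes a sort, not a copy-paste marathon.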
jp-tax-mcp · The qualified invoice system is brutal. An AI agent that handles it sells itself.
Japan's tekikaku seikyusho (適格請求書) regime took effect in late 2023 and forced every freelancer above the consumption tax threshold to register, issue conformant invoices, and validate counterparty registration numbers. The system is a cliff of paperwork, and every Japan-based freelance dev or designer hates it. The MCP surface is small and high-utility: validate a registration number against the National Tax Agency's public database, generate a conformant invoice from line items, parse a received invoice for compliance checks.
The audience is hundreds of thousands of small Japanese businesses, plus every English-speaking freelancer in Japan who currently outsources this to an accountant at considerable expense. Risk is regulatory accuracy, which is solvable by being conservative and citing official guidance.
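One piece of the validation tool is checkable locally before any network call: a qualified-invoice registration number is "T" followed by the issuer's 13-digit corporate number. A minimal sketch of that format gate (confirming the number is actually registered still requires the National Tax Agency's public lookup, as described above):

```typescript
// Format check only: "T" + 13-digit corporate number. This rejects
// obviously malformed input cheaply; real validation queries the NTA's
// public registration database afterwards.
function isPlausibleRegistrationNumber(input: string): boolean {
  return /^T\d{13}$/.test(input.trim());
}
```

Being conservative means the tool says "format is valid, registration unconfirmed" until the NTA lookup succeeds, and cites official guidance in every generated invoice.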
designer-mcp-kit · npx create-mcp-server for designers, not engineers.
Existing MCP scaffolds (create-mcp-server from Anthropic, various community templates) ship with engineer-default choices: bare TypeScript, raw zod schemas, no opinion about UX. A designer-focused starter would ship with a minimal styled output template, working examples for common patterns (resource-link, image, tool-output formatting), test scaffolding, and a SKILL.md template alongside. Goal: a designer can ship their first MCP in an evening instead of fighting boilerplate for a week.
Why I can ship this credibly: I've shipped six MCPs and have an opinion about the patterns that recur. Distribution is straightforward: npm + a launch post titled "How to ship your first MCP if you're a designer."
Bento · An auto-updating personal site that's smarter than Linktree.
Designer-engineer hybrids need a personal site that pulls live signals: latest npm downloads, newest GitHub commit, last X post, current shipping status. Linktree is too dumb. A custom portfolio is too brittle. A small static site that rebuilds every six hours from public APIs (npm, GitHub, RSS, Bluesky) is just right. Bonus: ship it as an open-source template, so other designers can adopt the format and create network effects.
Visibility ceiling: medium. Designer Twitter loves this kind of meta-portfolio thing. Could go viral if the design is good enough, which it would be.
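The rebuild loop is mostly URL plumbing over public APIs. A sketch of the data layer, assuming the real npm downloads endpoint at api.npmjs.com and an injected fetcher so the build (and its tests) can run without the network; the Signal shape is my own scaffolding:

```typescript
// One "signal" per tile on the site, rebuilt every six hours.
interface Signal { label: string; value: string; fetchedAt: string; }

// npm's public download-counts endpoint (real API).
const npmDownloadsUrl = (pkg: string) =>
  `https://api.npmjs.com/downloads/point/last-month/${encodeURIComponent(pkg)}`;

// Fetching is injected so the build script is testable offline.
type Fetcher = (url: string) => Promise<unknown>;

async function npmSignal(pkg: string, fetchJson: Fetcher): Promise<Signal> {
  const data = (await fetchJson(npmDownloadsUrl(pkg))) as { downloads: number };
  return {
    label: `npm / ${pkg}`,
    value: `${data.downloads} downloads last month`,
    fetchedAt: new Date().toISOString(),
  };
}
```

GitHub commits, RSS, and Bluesky posts follow the same pattern: one URL builder, one mapper into `Signal`, one static rebuild that renders the array.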
Onigiri · Standup, but agentic.
Most teams still run async standups in Slack as a copy-paste ritual. Onigiri uses Google Calendar + Gmail + Drive MCPs to read your yesterday (calendar events you actually attended, commits you pushed, docs you edited), draft today's plan, and produce a designed digest as a PDF via pdf-it. The user just confirms or adjusts.
Why this beats existing standup tools: it doesn't ask you to write your standup, it reads what you actually did. Distribution: any solo founder, freelancer, or remote designer is a target user. Effort: 4–6 days because all the MCP infrastructure already exists.
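The digest-assembly step is a pure function over whatever the calendar, Gmail, and Drive MCPs return. A sketch, with the input shapes as my assumptions:

```typescript
// Assumed input shape: what "yesterday actually was," pulled via MCPs.
interface Yesterday {
  events: string[];   // calendar events actually attended
  commits: string[];  // commit subjects pushed
  docs: string[];     // docs edited
}

// Draft the standup markdown that pdf-it then renders into the digest.
function draftStandup(y: Yesterday, todayPlan: string[]): string {
  const lines = [
    "## Yesterday",
    ...y.events.map((e) => `- 📅 ${e}`),
    ...y.commits.map((c) => `- 🔨 ${c}`),
    ...y.docs.map((d) => `- 📝 ${d}`),
    "## Today",
    ...todayPlan.map((t) => `- ${t}`),
  ];
  return lines.join("\n");
}
```

The user's only job is editing `todayPlan` before confirming; the yesterday half is already true.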
hanko-mcp · An MCP that generates and validates Japanese seal images.
Hanko (also inkan) are personal name seals used in Japan instead of signatures on contracts, government forms, and bank documents. The shapes follow specific conventions (round vs square, single column vs double column, kanji vs phonetic). Most foreign expats in Japan need one and don't know how to commission a correct one. A small MCP that generates appropriate seal images from a Japanese name (with kanji rendering, traditional border, optional aging effects), and validates uploaded seal images for common compliance issues (correct character usage, no border breaks).
Quirky enough to be memorable. Specifically Japanese. No competition. Effort: 3–5 days. Distribution: viral via the same channels as konbini.
stream-ui · React components for showing LLM token streams beautifully.
Every AI app reinvents how to render streaming text, tool calls, citations, retries, and intermediate states. Some get it right (Claude artifacts, Cursor's diff view). Most get it wrong. A small, opinionated React library that handles streaming-text reveal, token cursors, citation chip rendering, tool-call display, and graceful error states would be widely adopted by anyone shipping a generic Claude or OpenAI app.
Visibility ceiling is high because the audience is every team building a chat UI in 2026. Effort: 7–10 days. Risk: someone else ships a 90% version first. Defense: design polish, opinionated defaults, and a strong narrative essay shipped alongside.
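The core of such a library is a reducer over stream events, with React rendering layered on top. A sketch with illustrative event names; the real API would be shaped against actual Claude and OpenAI stream payloads:

```typescript
// Illustrative event vocabulary, not any provider's actual wire format.
type StreamEvent =
  | { type: "token"; text: string }
  | { type: "tool_call"; name: string }
  | { type: "error"; message: string }
  | { type: "done" };

interface StreamState {
  text: string;
  toolCalls: string[];
  status: "streaming" | "done" | "error";
}

const initial: StreamState = { text: "", toolCalls: [], status: "streaming" };

// Pure reducer: components subscribe to this state and render the
// streaming reveal, tool-call chips, and error states from it.
function reduce(state: StreamState, ev: StreamEvent): StreamState {
  switch (ev.type) {
    case "token":
      return { ...state, text: state.text + ev.text };
    case "tool_call":
      return { ...state, toolCalls: [...state.toolCalls, ev.name] };
    case "error":
      return { ...state, status: "error" };
    case "done":
      return { ...state, status: "done" };
  }
}
```

Keeping the state machine pure and the components thin is the opinionated default: every app reinvents the rendering, but almost none should reinvent the reducer.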
"Designing for Agents as Users" The long-form thesis that gives the rest of the work a backbone.
I already have futureproof.bymarsel.me as a planted flag. The next essay needs to be the one that crystallizes the agent-as-user thesis with worked examples and shippable proof. Five thousand words, fifteen visual examples, three live demos. Cross-post to Medium, dev.to, Increment if they're still publishing. Submit to Smashing Magazine and A List Apart.
Why this is leverage: a single canonical essay with the right framing becomes the link people share when they describe me. "Have you read his agents-as-users essay?" is the social transaction I'm trying to engineer.
a2ui-figma · The round-trip plug-in nobody has shipped.
A2UI specs are JSON. Figma is a visual editor. Currently there's no clean way for a designer to take an A2UI spec generated by an agent, edit the layout in Figma, and export the changes back to A2UI for the agent to render. A Figma plug-in that ingests A2UI JSON and renders it as Figma frames (with the inverse for export) closes a real workflow gap. It would also be the moment Figma's own community team takes notice, because Figma has been actively pushing AI-generated UI workflows.
Effort: 10–14 days because Figma plug-in development has its own learning curve. Visibility ceiling: featured by both Figma and Google as the canonical A2UI–Figma bridge.
mcp-inspector-pro · Anthropic's inspector is engineer-only. The designer-led version is missing.
The official MCP Inspector is a functional tool that lets you call MCP server tools manually and see their JSON output. It is not designed; it is exposed. A designer-led version with visual tool-dependency graphs, resource preview rendering, prompt-call timeline, and an actual visual hierarchy would become the default tool any designer reaches for when working with MCPs. Open-source, single-page, deployable from anywhere.
Why this lands at Anthropic specifically: their team already knows the inspector is a stop-gap. A polished alternative that they can point users to is something they actively want to surface. Direct line into Anthropic's design and DevRel teams.
04 / Wild cards · The five I'd take if I had range
Less obviously productive. Higher variance. Each one has a path to outsized recognition if it lands.
a2ui.tools · An open-source A2UI playground anyone can use without setup.
A single-page web app where you paste an A2UI spec and see it render live, side-by-side with the JSON. Add a prompt box that lets you generate A2UI specs from a Gemini-API or Claude-API call (user provides their key). Add a gallery of saved prompts. Effectively a designer's REPL for generative UI.
This is bigger than a single weekend project but the right scope is achievable in three weeks. Risk: Google ships an official version first. Defense: ship faster, design better, and own the URL a2ui.tools if it's available (it is, as of writing).
"Designer Ships" — a weekly YouTube channel A content track in parallel to the build track.
Twenty-minute episodes, weekly. Each one ships something visible: an MCP, a Claude Code workflow, a redesign, a teardown of an AI app's UX. The format is build-along, screen-recorded, lightly narrated. Over six months, twenty-six episodes become the canonical reference for "designer who ships AI tooling" on YouTube, because the genre is currently empty. Most AI YouTube is engineer-led or hype-led; almost none is designer-led with the actual ship.
Honest risk: YouTube is an unforgiving format and weekly cadence is brutal. The strategic argument is that the discipline of producing forces a faster build pace, which compounds with the rest of this list. Even fifteen episodes shipped well would be a singular asset.
Indonesia–SEA payments suite · GoPay, OVO, DANA, GCash, ShopeePay as a single MCP family.
Southeast Asia is the second-largest payments market after India by user count, and almost no MCP coverage exists. The unique angle: I'm Indonesian by passport and Filipino by heritage. I can credibly ship MCPs across these regions in a way no Western developer can. The pitch is "agentic commerce for half a billion users" with real APIs and real markets behind it.
Effort: 4–6 weeks for the full suite, much shorter for any single one. Visibility ceiling: featured by regional press, Indonesian and Filipino dev Twitter, possibly conference invites in Jakarta or Manila.
A Claude Code subagent for Japan UX review · Different from japan-ux-mcp: a reviewer, not a generator.
japan-ux-mcp generates Japanese-correct UI. The complementary surface is a Claude Code subagent that reviews existing UI for Japan-readiness and produces a written audit. Triggered by a slash command (/jp-review), takes a URL or markup, returns a graded report covering forms, copy, typography, trust, mobile, and cultural fit. Output is a markdown file, optionally rendered to PDF via pdf-it.
Why this matters: Anthropic's subagent format is new and underused. A polished, vertical subagent (Japan UX) is a model of how to use the format well. Lands as both a useful tool and a template.
Material Design 4: Speculative Spec · A 30-page proposal for what an AI-native Material looks like.
Material Design 3 was published in 2021 and assumes humans typing on phones. The fourth iteration, whenever Google ships it, will need to account for agents generating UI on the fly, dynamic A2UI rendering, multi-modal interaction, and accessibility patterns for screen-reader users navigating agent output. Most of this hasn't been articulated.
A speculative spec by an outsider designer who has actually shipped agent UI is exactly the kind of artifact Material's team would notice. Even if Google doesn't adopt any of it, the document becomes a reference cited by other designers writing about generative UI, and that citation network compounds. Effort is heavy: 4–6 weeks of focused writing and visual design. The payoff, if it lands, is durable.
05 / Anti-build moves · Recognition surfaces that don't require shipping a project
Parallel track. Most of these I should be doing alongside whatever I'm building.
| Move | Effort | What it gets me |
|---|---|---|
| Apply to Google Developer Expert (GDE) program | 1 week of paperwork after 3+ Google ecosystem projects. | Formal recognition, hiring pipeline visibility, GDE-only events. |
| Submit DevFest Tokyo 2026 talk | 2 days for CFP. Watch July–August. | Direct face time with Google Japan and APAC engineers. |
| Open-source contribution to google/A2UI | Per PR, low. Cumulative is the value. | Direct code path into Google's tree. Names get noticed. |
| Pitch Google Cloud Community Medium | One essay every 4–6 weeks. | Hosted by Google's official Medium publication, syndicated by DevRel. |
| Submit Gemini API showcase | One demo, fully documented. | Featured on Google AI for Developers gallery. |
| Apply to Anthropic's Build with Claude showcase | One project + writeup. | Tweeted by Anthropic, often by individuals on the team. |
| Speak at Tokyo MCP / AI meetups | Per talk, 3–5 days. | Recurring local visibility. Several Tokyo meetups exist. |
Of these, GDE and the DevFest Tokyo CFP are the two highest-leverage. GDE because it is a formal program with a known hiring tail. DevFest because Tokyo's engineering community is small enough that one strong talk lands.
06 / Validation · The 15-minute pre-build check
Before committing time to any of these, run this checklist. The cost of skipping it is shipping into a saturated space.
- npm. Search the proposed package name and adjacent terms at npmjs.com/package/&lt;name&gt;. If a v1.0+ exists with active downloads, pivot the name. If a stub exists, contact the author or pick a different name.
- GitHub. Search the concept across GitHub. If three or more repositories in the last six months target the same surface, the space is saturated.
- MCP registries. Glama, PulseMCP, mcp.so, mcpmux. A new MCP that overlaps with three existing ones needs a strong differentiation argument, not just better polish.
- Google's own samples. Check google/A2UI, google/adk-samples, and google-gemini/*. If Google has shipped or announced something close, the official path will outpace any community version.
- Twitter and Reddit search. The concept name, the user-facing pitch, plus relevant hashtags. If the discourse is already happening, ride it; if it's not, the project itself starts the conversation.
- Underlying API access. For wrapper MCPs and integration projects, confirm the API exists, is accessible without partner status, and the ToS doesn't prohibit the use case. This kills more projects than competition does.
The whole check is fifteen minutes per idea. Skipping it is how you find out two weeks in that someone else shipped the same thing on the day you started.
07 / Sources · The reading behind the bets
Everything that shaped the conclusions, with dates and attribution. Verify before building.
- Introducing A2UI: An open project for agent-driven interfaces · Google Developers Blog, December 15, 2025
- A2UI v0.9: The New Standard for Portable, Framework-Agnostic Generative UI · Google Developers Blog
- google/A2UI · the official repo, contributor guidelines, and current sample list
- A2UI in the World · community showcase index
- A2UI Spring 2026 Update · OpenClaw integration, ADK Python ecosystem expansion
- Announcing official MCP support for Google services · Google Cloud Blog
- google/adk-samples · ADK reference applications
- GDG Tokyo events · DevFest CFP timing reference
- Google Developer Experts program · application requirements
- Figma MCP server announcement · design-side reference for the MCP design surface gap