Building Ethical AI While Living Out of a Backpack
I’m writing this from an airport lounge, watching boarding groups shuffle forward while my laptop battery ticks down faster than expected. Somewhere between the security line and a mediocre espresso, I squeezed in a client call. Nothing dramatic happened — which, after more than a decade of working remotely, feels like a minor victory.
When I started this journey fifteen years ago, I didn’t know what I was getting myself into. Now, seventy-four countries later, my life is stitched together by Wi-Fi passwords, calendar invites, and the vague idea of where I’ll be allowed to stay for the next three months. The postcard appeal of this life, the freedom, the flexibility, the breathtaking sunsets, wears off quickly. What replaces it is something quieter and more demanding: responsibility without routine, and commitment without a fixed home.
That tension has followed me directly into my work. I now spend much of my time helping build AI tools, which brings its own set of contradictions. The systems I work on are meant to scale, standardize, and optimize — while my own life remains fragmented, temporary, and shaped by borders I don’t control.
I’m currently involved with an Italian startup working in the area of AI and translation, a field that sits uncomfortably at the center of today’s automation debates. Translation is often framed as low-hanging fruit for AI: something that can be made faster, cheaper, and eventually invisible. But working close to the problem raises a harder question. What happens when automation touches a field built on human judgment, cultural nuance, and accountability?
Translation has never felt abstract to me. Over the years, it has been the difference between understanding and confusion, access and exclusion, feeling at home and feeling unmistakably foreign. Languages aren’t just systems to optimize; they carry context, intention, and risk. Anyone who has dealt with a mistranslated contract or a misunderstood medical instruction understands this immediately.
The dominant narrative around AI tends to flatten these complexities. Translation becomes “solved,” and the people behind it become inefficiencies to be removed. But in practice, scale often increases risk, not certainty. The more content moves across languages, the more judgment matters — especially when mistakes have real consequences.
Working on these tools has made the ethical tension impossible to ignore. It’s tempting to automate more, remove friction, and push humans further from the decision-making process. Doing so is usually faster, easier, and more attractive on paper. Resisting that pull means accepting limits, uncertainty, and slower progress — choices that don’t sit comfortably in an industry obsessed with speed.
This conflict feels sharper because of how I live. I don’t have a permanent home or a single labor market. My days are shaped by visa rules, border crossings, and unreliable hotel Wi-Fi. Yet the systems being built in this space don’t pause just because their builders are in transit. They continue to shape how work is distributed and whose labor remains visible.
There’s a dissonance in helping design long-term technological systems while living a life defined by short stays. Over time, it becomes clear how many decisions in tech are made by people insulated from their effects — settled, secure, and far removed from the precarity those decisions can introduce. Constant movement has made me more aware of how fragile access to work can be, and how quickly “efficiency” turns into exclusion.
Spending time working with teams in Italy, particularly in and around Bologna, reinforced this awareness. There’s a strong tradition of linguistic craft there — an understanding of language as something shaped carefully, not processed indiscriminately. From that perspective, removing the human element entirely doesn’t read as progress. It reads as erosion.
There’s no clean answer to any of this. “Ethical AI” isn’t a clear category, and it’s certainly not a fixed achievement. The tools being built today will continue to change how work is done, and they’ll continue to be built inside systems that reward scale and speed. The difference lies in what we choose not to automate, and what complexity we’re willing to leave unresolved.
I’ll be moving again next month, though I don’t yet know where. I don’t know how translation — or artificial intelligence more broadly — will ultimately embed itself in the global economy. What I do know is that constant mobility doesn’t remove responsibility. If anything, it sharpens it. Living without a fixed home makes the consequences of technological choices harder to ignore — and highlights how easily people can be left behind when progress accelerates without care.
Anthony Neal Macri is a consultant and strategist with over 15 years of experience working at the intersection of digital growth, technology, and global mobility. He has worked remotely across 70+ countries, advising startups and international companies on user acquisition, localization, and scalable digital systems.
His work currently includes collaboration with LanguageCheck.ai, where he is involved in building AI tools for translation quality and risk detection, with a focus on keeping human expertise central to language workflows. Anthony writes and speaks about remote work, ethical technology, AI and labor, and the practical consequences of building global systems in a borderless world.