Increasing positive outcomes for businesses and the people they serve in a deeply technological world
The broad effects of technology are creating new demands for organizations to adopt practices that avoid negative secondary impacts from the products and services they design and deliver.
In our deeply technological ecosystem, the complex interplay between consumers and channels of influence and interaction makes it difficult to predict potential effects. Pair that with business priorities and the metrics driving our markets, and it is not difficult to understand why organizations aren’t able to curb the unfavourable effects of their inventions, effects that play out across the news and that we can feel in our homes.
Despite this, it’s a great time to formulate and deploy countermeasures. There is a groundswell of interest in these problems, and our appreciation of how to build effective experiences for humans is at an all-time high. As we evolve our design and delivery practices to consider a fuller spectrum of emergent impacts and opportunities, we will be improving our relationship with all of our stakeholders and finding new ways to create value — that is fundamentally good business.
Root causes and ramifications
It’s not easy being human in a technological world.
Well, that sounds a bit ironic. After all, humans have developed and continue to develop technology (at least for now) and most of it with the intention of improving our lives. Clearly, people have been living with technology for hundreds of years and with measurable societal improvements (check out Enlightenment Now by Steven Pinker). However, technology today intermediates most information flow and transforms our physical environment (consider IoT and ambient computing). This unprecedented access to the human mind and body changes the game.
Writers like Yevgeny Zamyatin, Aldous Huxley and George Orwell were imagining dystopian futures during the first half of the 20th century. However, most pundits didn’t see the potential threat. Instead, in 1930 John Maynard Keynes famously predicted that technology would have reduced our work week from five to two days by our current time. That certainly didn’t come to pass. Whether or not work has actually increased, the intensified competition and complexity demanding our near-continual attention make it undeniably feel as though it has.
As the rate of technological development soars, people are confronting new and more acute stresses in their relationship to technology, highlighting unforeseen friction and leading to diverse pathologies with no clear antidotes; we can no longer rely on evolution to adapt. These psychological and physical burdens often stem from goals that are misaligned with those of our tech creations.
The dehumanization of the human experience — examples
- Screen & gaming addiction
- Misinformation, cognitive bias hacking, filter bubbles
- Self-image & self-esteem
- Loss of traditional jobs
- Privacy breaches
- Profiling & voter influence
- Learning delays & anxiety in children
- Productivity obsession
- Sleep deprivation
- Loss of empathy
- Nutritional reduction & obesity
- Increasing personal debt
Should we consider a neo-Luddite revolution? Perhaps there will be a species-level immune response to this growing list. Tiffany Shlain’s Tech Shabbat may be an example of this — reflecting our need to detox.
Technology may be a threat, but it also represents a major aspect of human potential. There is a wonderful poetry in viewing humanity’s drive, in the face of all obstacles, to exceed our constraints: to continually learn, define and redefine; in short, to be a creative force of nature. Technology may be one of the major expressions of this spirit.
So, we won’t trash our tech just yet.
That leaves us to consider other approaches to mitigate the friction and secondary impacts we experience, and to deliver more human positive outcomes in technology design and development.
Reactions and responses
There’s been a surge of media attention, academic analysis and public outrage concerning the negative impacts of technology on our daily lives. Though the criticism has largely been centred on a handful of companies, many large corporations are taking note. Their response has been to stand up ethics boards (and then dismantle them, in Google’s case) and hope they can stay ahead of regulation. We need more practical ways for organizations to find opportunities to improve humanity’s relationship to technology and make ethics actionable.
Media attention: Mostly blaming big tech for privacy breaches and platforms that surreptitiously influence people or can be misused by nefarious third parties. ‘Let’s bash Facebook!’
Academic and popular analysis: Yuval Harari, Maryanne Wolf, Nick Bostrom, Andrew Ng, Shoshana Zuboff, Douglas Rushkoff and many others explore our dichotomous relationship with technology.
Nascent organizations: Groups like Tristan Harris’ Center for Humane Technology, which aim to influence public opinion, tech CEOs, and policy makers, are largely focused on what’s alternately called the attention economy and surveillance capitalism. The Future of Life Institute, founded by Max Tegmark, with famous members like Elon Musk, is concerned with existential risks like climate change, nuclear threats and the future of AI.
Governments, G7, UN, NGOs: Recognizing the threat AI poses to the future of employment and productivity. See also the US Congress’s obsession with media ‘bias’ against their particular agendas. For more productive thinking, check out the World Economic Forum’s Centre for the 4th Industrial Revolution.
Ethical corps: Accreditation like B Corporation and Salesforce’s 1–1–1 philanthropic model suggest some openness to measuring benefits for humanity as part of the corporate mandate. The formation of ethics groups in large organizations is further evidence of awareness.
Fear and anger: Stoked by the media and uncertain political leadership, people are afraid of the future, but are also looking at aspects of their current challenges and seeing the link to the technologies they consume. We’re also seeing employees consider the impacts of their work on others, such as with Google’s censorship of search results.
Craig Brod, author of Technostress: The Human Cost of the Computer Revolution (1984), was early to understand the potential influence of computers on human psychology, with chapters like “Robot/Human Human/Robot”, “An End to Romance”, and “Childhood Lost”. Just before the book was published, I’d bought an Apple IIe, and with a prescient fear for my sanity, my uncle bought me the book. I should have read it.
“As the nature and needs of the self are altered by electronic space, so is the nature of love.”
- Craig Brod, Technostress
What is human positive design?
Human positive design (h+d) is the practice of developing products and services that aim to avoid negative and unintended secondary impacts on people, including unintended use by malicious parties. It can penetrate most aspects of how organizations operate and deliver value, increasing positive outcomes for people in addition to satisfying business goals.
It sounds a lot like human centred design, and the two certainly intersect. Human positive design shines a light on what human centred design should be in its ideal practice, and it does the same for behavioural economics. While both human centred design and behavioural economics have a deep appreciation of the human condition, their application of that knowledge most often serves the goals of the business or organization employing them; their starting point is not finding ways for the technology we interact with to produce positive outcomes for humans. h+d, however, is both included within these practices and inclusive of them. I’ll explore this further as we consider how a human positive design practice can have a broad organizational impact.
As terms go, I like the use of ‘design’ in h+d as the word has become a fundamental enough concept that it is sufficient for describing something far reaching in the operation of organizations. ‘Human positive’ should be pretty obvious, though arguably a bit narrow given that it doesn’t explicitly include our environment, flora, fauna, future sentience, and other stuff we should care about. It started as a placeholder, and then it stuck. It’s good enough.
I began using the term a year ago while having conversations about an old investment thesis I was trying to resurrect. The thesis stemmed from my personal experiences with information overload and manufactured complexity, and reflected a basic entrepreneurial instinct to start businesses that solve common problems people face.
I outlined a few principles and then started to think of ideas for technology I could develop to solve, well, our problems with technology. I was interested in ideas that could protect and enhance people. About seven years ago, one of the first business aspirations I had and prototypes I built based on this thesis was called dskew. It aimed to help people understand the skewing of the relative importance of stories in the media while also measuring bias and other characteristics that would allow readers to get a more complete perspective. Never launched it, though.
It occurred to a friend of mine that these principles, and others in the same vein, could become the basis of a practice that helps organizations to adopt them. We had experience in enterprise transformation and knew how a set of principles can be packaged and injected into the operating system of an organization — elements like governance, process, org. structures, performance management, DevOps, DesignOps, product management, and so on. We convened a group of like-minded and experienced collaborators and began brainstorming how we could use tools like Strategic Foresight as jumping off points for practical exercises to explore secondary impacts and opportunities. The work continues.
Why will businesses care?
In Deloitte’s 2019 executive survey assessing readiness for the Fourth Industrial Revolution, leaders rated societal impact as the most important factor when evaluating their organizations’ annual performance. They rated it ahead of both financial performance and customer or employee satisfaction.¹ Further demonstrating that finding, the CEOs at the Business Roundtable in August signed a letter committing their corporations to creating value for all stakeholders (including customers, employees, suppliers, community and the environment), not just shareholders.²
That’s a good sign. CEOs will be interested to learn about tools to achieve nebulous societal impact goals — especially tools that are tied to the driving factors of the Fourth Industrial Revolution itself. I’ve boiled it down to three reasons that focus on risk management and increased opportunities for growth:
1. Sustainable & loyal customers (77% of consumers purchase brands they trust³): Reducing negative impacts will increase trust, smooth adoption, deepen engagement, and improve the mid- and long-term viability of products and services. And healthy engagement will lead to more healthy engagement.
2. Inspired & engaged employees ($450B estimated lost revenue in US due to disengaged employees⁴ & 7.6% higher stock performance by companies with purpose clearly understood by employees⁵): Motivated and purpose-driven employees are more creative and easier to retain. This new generation of workers is demonstrably more demanding of their employers in these areas.
3. New opportunities & growth (Purpose-driven companies perform 28x better than national average⁶): Learning to explore the challenges people have adapting to our technologically driven world will lead to enhanced or new products and services, and a real innovation capability.
There are numerous directions h+d can take, and many may well be underway. We can build Venture Studios (startup co-founding organizations) focused on improving the human-tech symbiosis with a focus on the psychophysiological relationship to technology. We can create and evolve certification organizations that help parents decide what apps are psychologically safe for their kids. And we can invent models of cognitive behaviour to help companies build human positive testing right into their DevOps pipelines.
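As a rough illustration of what “human positive testing in a DevOps pipeline” might look like, here is a minimal sketch in Python. Everything in it is a hypothetical assumption, not an established standard: the metric names, thresholds, and the idea of gating a release on engagement-health proxies are simply one way a team could make such a check concrete.

```python
# A hypothetical "human positive" gate a team might run in CI before a release.
# The metrics and thresholds below are illustrative assumptions, not a standard.

from dataclasses import dataclass


@dataclass
class SessionMetrics:
    median_session_minutes: float    # typical session length in the last release window
    late_night_session_share: float  # fraction of sessions between midnight and 5 a.m.
    notification_open_rate: float    # fraction of opens driven by push notifications


# Thresholds a team might agree on as proxies for compulsive-use patterns.
MAX_MEDIAN_SESSION_MINUTES = 30.0
MAX_LATE_NIGHT_SHARE = 0.15
MAX_NOTIFICATION_OPEN_RATE = 0.40


def human_positive_gate(m: SessionMetrics) -> list:
    """Return a list of violations; an empty list means the build may proceed."""
    violations = []
    if m.median_session_minutes > MAX_MEDIAN_SESSION_MINUTES:
        violations.append("sessions run longer than the agreed healthy median")
    if m.late_night_session_share > MAX_LATE_NIGHT_SHARE:
        violations.append("too much usage is happening late at night")
    if m.notification_open_rate > MAX_NOTIFICATION_OPEN_RATE:
        violations.append("engagement depends too heavily on push notifications")
    return violations


# Example: this hypothetical release would be flagged on two counts
# (long sessions and late-night use) before shipping.
issues = human_positive_gate(
    SessionMetrics(
        median_session_minutes=42.0,
        late_night_session_share=0.22,
        notification_open_rate=0.31,
    )
)
for issue in issues:
    print(f"human-positive check failed: {issue}")
```

The design choice worth noting is that the gate behaves like any other quality gate (lint, tests, security scans): it turns a value the team cares about into a pass/fail signal that ships with the code, rather than a report reviewed after the fact.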
At Rangle.io, we are building a practice and community around helping enterprises integrate h+d into the way they devise and apply technology, contributing to a human positive future. A few high-impact starting points we’re exploring include applying strategic foresight during product planning, writing user stories that account for human positive dimensions, measuring new metrics to align teams, and expanding approaches to quality assessment.
We are just at the start of this journey and would love to hear from people and organizations interested in furthering the discussion and collaborating on practical ways to increase human positive outcomes.