Will robots undermine the value of living?
I have written before about the impact of the technologies of the fourth industrial revolution – robotics, artificial intelligence, machine learning – on the job market. This revolution will increasingly polarize labor as low-skilled jobs are automated, and the same fate is spreading to middle-class jobs – e.g., financial services and business process outsourcing, areas that some are considering as diversification objectives.
This revolution could yield inequity, particularly through its ability to disrupt labor markets – restaurants are already toying with robots to replace service staff. The rewards of the revolution will shift even further toward the highly skilled, followed by investors and capital. These technologies will displace so many workers that full employment will no longer be necessary to produce the goods and services the economy requires.
Since having a job is what allows one to purchase goods and services, how will the unrequired workers survive? Further, with many of our goods and services produced by digital and robotic technologies, how will they be sold if the mass of the population does not have the money to purchase them?
The only solution, short of constraining the application of these technologies, is a scheme called social credit, devised by the Scottish engineer C. H. Douglas. Social credit is based on the observation that the production effort is grounded in the Earth’s natural wealth and in the accumulated technological and cultural achievements of all of us and our ancestors.
Because of this inherited common wealth, social credit calls for a dividend, derived from debt-free, newly created money, to be paid regularly to everyone in society, irrespective of whether one is employed or not. Silicon Valley is now talking about such free money, and it was recently the subject of a Swiss referendum.
This approach provides free money to the population so that they can purchase the goods and services generated by the robots and other digital technologies. However, a major problem will also have to be addressed: what becomes of the value structure of living when people no longer have to provide for themselves and their families by their own decisions and efforts?
In this respect they cease being moral agents and become moral patients, as the moral responsibility for such tasks is taken over by others – the robots do the work. A moral agent is one who can acquire and process information from its surroundings, use it to develop goals and action plans, and implement those plans, in so doing taking moral ownership of its actions.
A moral patient is owed moral duties and is capable of suffering moral harms and obtaining moral benefits, but does not take ownership of the moral content of its own experience. Moral agency is the foundation of the value structure of modern liberal-democratic states. (Reference: “The Rise of the Robots and the Crisis of Moral Patiency” by John Danaher of NUI Galway.)
Consider, as a similar example, the change that can happen to a worker as he ages. At one time he was able to provide for his family; he built a house, which he continually upgraded and maintained; he helped others in the community with his skills; and by his efforts he acquired other properties, which he planted to produce fruit for sale. If he was not at his normal place of work or sleeping, he was doing something. He was a moral agent, and providing for himself and others was the objective of his life. It gave value to his existence.
Eventually he grew old and now suffers the ravages of time. He is well looked after by his family. However, he complains bitterly of his inability to, for example, climb a ladder to fix his roof or prune the trees on his estate – to do the things that made life worthwhile in his younger days. He has become a moral patient and hates it intensely.
Generally, technology relieves us of the drudgery of certain tasks – e.g., machines, power tools – and makes our role as moral agents easier, more efficient and more effective. The fourth industrial revolution, however, is robbing many of us of our jobs. Even though, via social credit, we will be able to support ourselves, we may feel in certain ways useless as we become in part moral patients, unless we can find other avenues through which to express ourselves as moral agents.
Surely we will have more leisure (free) time and may be able to find something to do intellectually, or in the artistic or cultural sphere, or in helping those who cannot help themselves – the sick or the disabled.
The rise of robots will lead to the decline of human moral agency and the rise of moral patiency in us. The question then is: given the value that being a moral agent lends to us all, why are we not taking control of these technologies to ensure that patiency does not arise? The technologies, after all, do not have a will of their own. Do we not have the power to shape them, rather than use them to undermine this basic tenet of the value of life?
However, listen to Danaher: “The development of robots and AI in manufacturing and service industries is driven by the seductive appeal of economic success; the growth of robots and AI in government stems from a commitment to generally-accepted values (efficiency, accuracy, cost-effectiveness) and from the need to respond to the complexity of the outside world; and, finally, our increasing willingness to outsource personal moral development to robots and AI is facilitated by psychological biases and heuristics. In other words, the trends stem from forces that are, to a considerable extent, larger and more powerful than us.”
– Mary K. King