
Dr. Henrik Skaug Sætra
Department of Business, Languages and Social Sciences, Østfold University College, Halden 1757, Norway.

Research Keywords & Expertise

Artificial Intelligence
Environmental Ethics
Game Theory
Sustainability
Technology

Fingerprints

Artificial Intelligence
Technology
Social Robots
Sustainability



Short Biography

Henrik Skaug Sætra is an associate professor at the Faculty of Business, Languages, and Social Sciences at Østfold University College. He is a political scientist who takes a broad, interdisciplinary approach to the ethics and the individual and societal implications of technology, as well as to environmental ethics and game theory. In recent years, Sætra has worked extensively on the effects of technology on liberty and autonomy, on sustainability, and on various issues related to the use of social robots.


Feed

Perspective
Published: 06 August 2021 in Healthcare

Digital technologies have profound effects on all areas of modern life, including the workplace. Certain forms of digitalisation entail simply exchanging digital files for paper, while more complex instances involve machines performing a wide variety of tasks on behalf of humans. While some are wary of the displacement that occurs when, for example, robots take over tasks previously performed by humans, others argue that robots merely take over tasks that should have been performed by machines in the first place, and never by humans. Understanding the impacts of digitalisation in the workplace requires an understanding of the effects of digital technology on the tasks we perform, and these effects are often not foreseeable. In this article, the changing nature of work in the health care sector is used as a case to analyse such change and its implications at three levels: the societal (macro), organisational (meso), and individual (micro) level. Analysing these transformations with a layered approach helps reveal the actual magnitude of the changes that are occurring and creates the foundation for an informed regulatory and societal response. We argue that, while artificial intelligence, big data, and robotics are revolutionary technologies, most of the changes we see involve technological substitution rather than infrastructural change. Even though this undermines the assumption that these new technologies constitute a fourth industrial revolution, their effects at the micro and meso levels still require both political awareness and proportional regulatory responses.

ACS Style

Henrik Sætra; Eduard Fosch-Villaronga. Healthcare Digitalisation and the Changing Nature of Work and Society. Healthcare 2021, 9, 1007.

AMA Style

Henrik Sætra, Eduard Fosch-Villaronga. Healthcare Digitalisation and the Changing Nature of Work and Society. Healthcare. 2021;9(8):1007.

Chicago/Turabian Style

Henrik Sætra; Eduard Fosch-Villaronga. 2021. "Healthcare Digitalisation and the Changing Nature of Work and Society." Healthcare 9, no. 8: 1007.

Perspective
Published: 29 July 2021 in Sustainability

Artificial intelligence (AI) now permeates all aspects of modern society, and we are simultaneously seeing an increased focus on issues of sustainability in all human activities. All major corporations are now expected to account for their environmental and social footprint and to disclose and report on their activities. This is carried out through a diverse set of standards, frameworks, and metrics related to what is referred to as ESG (environment, social, governance), which is now, increasingly often, replacing the older term CSR (corporate social responsibility). The challenge addressed in this article is that none of these frameworks sufficiently capture the nature of the sustainability-related impacts of AI. This creates a situation in which companies are not incentivised to properly analyse such impacts. Simultaneously, it allows companies that are aware of negative impacts to avoid disclosing them. This article proposes a framework for evaluating and disclosing ESG-related AI impacts based on the United Nations' Sustainable Development Goals (SDGs). The core of the framework is presented here, with examples of how it forces an examination of micro-, meso-, and macro-level impacts, a consideration of both negative and positive impacts, and an accounting for ripple effects and interlinkages between the different impacts. Such a framework makes analyses of AI-related ESG impacts more structured, systematic, and transparent, and it allows companies to draw on research in AI ethics in such evaluations. In the closing section, Microsoft's sustainability reporting from 2018 and 2019 is used as an example of how sustainability reporting is currently carried out and how it might be improved by the approach advocated here.

ACS Style

Henrik Sætra. A Framework for Evaluating and Disclosing the ESG Related Impacts of AI with the SDGs. Sustainability 2021, 13, 8503.

AMA Style

Henrik Sætra. A Framework for Evaluating and Disclosing the ESG Related Impacts of AI with the SDGs. Sustainability. 2021;13(15):8503.

Chicago/Turabian Style

Henrik Sætra. 2021. "A Framework for Evaluating and Disclosing the ESG Related Impacts of AI with the SDGs." Sustainability 13, no. 15: 8503.

Conference paper
Published: 03 July 2021 in Algorithms and Data Structures

Autism spectrum disorder is a neurodevelopmental disorder characterized by early-onset difficulties in social communication and unusually restricted, repetitive behaviors and interests. Children with autism are often not as naturally motivated as others by social interaction and situations involving dyadic play, which is considered problematic as both are essential arenas for young children's development. In this article we explore how computer-based interventions in various guises show some promise in engaging and motivating children with autism to engage in play. This promise has led to the development of a range of computer-based interventions aimed at fostering learning, ranging from VR to regular computer games to social robots with artificial intelligence. We evaluate the evidence gathered on the use of various forms of computer-based play and also discuss the potential ethical implications of such interventions.

ACS Style

Christine Dahl; Henrik Skaug Sætra; Anders Nordahl-Hansen. Computer-Aided Games-Based Learning for Children with Autism. Algorithms and Data Structures 2021, 145-158.

AMA Style

Christine Dahl, Henrik Skaug Sætra, Anders Nordahl-Hansen. Computer-Aided Games-Based Learning for Children with Autism. Algorithms and Data Structures. 2021:145-158.

Chicago/Turabian Style

Christine Dahl; Henrik Skaug Sætra; Anders Nordahl-Hansen. 2021. "Computer-Aided Games-Based Learning for Children with Autism." Algorithms and Data Structures: 145-158.

Journal article
Published: 05 May 2021 in Education Sciences

Students often perceive statistics as a difficult subject, and it is frequently named as one of the primary causes of high dropout rates in economics programmes in Norway. In order to support the learning process in statistics courses, and in order to make the courses more flexible, the author experimented with the use of Padlet in two different student groups taking an introductory course in statistics for economists. The purpose was to overcome the difficulty of engendering social engagement and activity and fostering effective mediation, scaffolding and collaborative learning in large student groups scheduled for traditional lectures in large auditoriums. The author's experiences and the students' evaluations of the model are presented here, along with the theoretical justification of the use of Padlet and the context in which it was tested. The results show that computer-supported collaborative learning can be an effective supplement or alternative to traditional study groups for those who either prefer it or cannot take part in regular study groups. The students used Padlet actively, and a majority of the students reported that it was a significant or highly significant factor in their learning process.

ACS Style

Henrik Sætra. Using Padlet to Enable Online Collaborative Mediation and Scaffolding in a Statistics Course. Education Sciences 2021, 11, 219.

AMA Style

Henrik Sætra. Using Padlet to Enable Online Collaborative Mediation and Scaffolding in a Statistics Course. Education Sciences. 2021;11(5):219.

Chicago/Turabian Style

Henrik Sætra. 2021. "Using Padlet to Enable Online Collaborative Mediation and Scaffolding in a Statistics Course." Education Sciences 11, no. 5: 219.

Journal article
Published: 05 February 2021 in Sustainability

Artificial intelligence (AI) is associated with both positive and negative impacts on both people and planet, and much attention is currently devoted to analyzing and evaluating these impacts. In 2015, the UN set 17 Sustainable Development Goals (SDGs), consisting of environmental, social, and economic goals. This article shows how the SDGs provide a novel and useful framework for analyzing and categorizing the benefits and harms of AI. AI is here considered in context as part of a sociotechnical system consisting of larger structures and economic and political systems, rather than as a simple tool that can be analyzed in isolation. This article distinguishes between direct and indirect effects of AI and divides the SDGs into five groups based on the kinds of impact AI has on them. While AI has great positive potential, it is also intimately linked to nonuniversal access to increasingly large data sets and the computing infrastructure required to make use of them. As a handful of nations and companies control the development and application of AI, this raises important questions regarding the potential negative implications of AI on the SDGs. The conceptual framework here presented helps structure the analysis of which of the SDGs AI might be useful in attaining and which goals are threatened by the increased use of AI.

ACS Style

Henrik Skaug Sætra. AI in Context and the Sustainable Development Goals: Factoring in the Unsustainability of the Sociotechnical System. Sustainability 2021, 13, 1738.

AMA Style

Henrik Skaug Sætra. AI in Context and the Sustainable Development Goals: Factoring in the Unsustainability of the Sociotechnical System. Sustainability. 2021;13(4):1738.

Chicago/Turabian Style

Henrik Skaug Sætra. 2021. "AI in Context and the Sustainable Development Goals: Factoring in the Unsustainability of the Sociotechnical System." Sustainability 13, no. 4: 1738.

Article
Published: 01 January 2021 in Paladyn, Journal of Behavioral Robotics

Human beings are deeply social, and both evolutionary traits and cultural constructs encourage cooperation based on trust. Social robots interject themselves in human social settings, and they can be used for deceptive purposes. Robot deception is best understood by examining the effects of deception on the recipient of deceptive actions, and I argue that the long-term consequences of robot deception should receive more attention, as it has the potential to challenge human cultures of trust and degrade the foundations of human cooperation. In conclusion: regulation, ethical conduct by producers, and raised general awareness of the issues described in this article are all required to avoid the unfavourable consequences of a general degradation of trust.

ACS Style

Henrik Skaug Sætra. Social robot deception and the culture of trust. Paladyn, Journal of Behavioral Robotics 2021, 12, 276-286.

AMA Style

Henrik Skaug Sætra. Social robot deception and the culture of trust. Paladyn, Journal of Behavioral Robotics. 2021;12(1):276-286.

Chicago/Turabian Style

Henrik Skaug Sætra. 2021. "Social robot deception and the culture of trust." Paladyn, Journal of Behavioral Robotics 12, no. 1: 276-286.

Journal article
Published: 23 September 2020 in Technology in Society

Privacy relates to individuals and their ability to keep certain aspects of themselves away from other individuals and organisations. This leads both proponents and opponents of liberalism to argue that liberalism involves allowing individuals to determine for themselves the level of privacy they desire. If they are given adequate information and the ability to choose, the results are argued to be legitimate, even if individuals choose to bargain away all or most of their privacy in return for convenience, economic benefits, etc. However, the individualistic approach to privacy is insufficient, due to a set of externalities and information leakages involved in privacy issues. A crucial aspect of privacy is that it is an aggregate public good, and recognising this lets us see why government intervention is both beneficial and necessary for securing the provision of optimal levels of privacy. This conception of privacy enables us to treat it as a good that is underprovided due to market failure. The article shows how liberals can justify government interference for the protection of privacy by relying on the avoidance of harm, and not on paternalism or other arguments not easily reconcilable with liberalism.

ACS Style

Henrik Skaug Sætra. Privacy as an aggregate public good. Technology in Society 2020, 63, 101422.

AMA Style

Henrik Skaug Sætra. Privacy as an aggregate public good. Technology in Society. 2020;63:101422.

Chicago/Turabian Style

Henrik Skaug Sætra. 2020. "Privacy as an aggregate public good." Technology in Society 63: 101422.

Journal article
Published: 06 September 2020 in Technology in Society

Should we deploy social robots in care settings? This question, asked from a policy standpoint, requires that we understand the potential benefits and downsides of deploying social robots in care situations. Potential benefits could include increased efficiency, increased welfare, physiological and psychological benefits, and experienced satisfaction. There are, however, important objections to the use of social robots in care. These include the possibility that relations with robots can potentially displace human contact, that these relations could be harmful, that robot care is undignified and disrespectful, and that social robots are deceptive. I propose a framework for evaluating all these arguments in terms of three aspects of care: structure, process, and outcome. I then highlight the main ethical considerations that have to be made in order to untangle the web of pros and cons of social robots in care, as these pros and cons are related to the trade-offs regarding quantity and quality of care, process and outcome, and objective and subjective outcomes.

ACS Style

Henrik Skaug Sætra. The foundations of a policy for the use of social robots in care. Technology in Society 2020, 63, 101383.

AMA Style

Henrik Skaug Sætra. The foundations of a policy for the use of social robots in care. Technology in Society. 2020;63:101383.

Chicago/Turabian Style

Henrik Skaug Sætra. 2020. "The foundations of a policy for the use of social robots in care." Technology in Society 63: 101383.

Journal article
Published: 16 July 2020 in The Humanistic Psychologist
ACS Style

Henrik Skaug Sætra. Toward a Hobbesian liberal democracy through a Maslowian hierarchy of needs. The Humanistic Psychologist 2020, 1.

AMA Style

Henrik Skaug Sætra. Toward a Hobbesian liberal democracy through a Maslowian hierarchy of needs. The Humanistic Psychologist. 2020:1.

Chicago/Turabian Style

Henrik Skaug Sætra. 2020. "Toward a Hobbesian liberal democracy through a Maslowian hierarchy of needs." The Humanistic Psychologist: 1.

Arena of health
Published: 04 July 2020 in Human Arenas
ACS Style

Henrik Skaug Sætra. First, They Came for the Old and Demented. Human Arenas 2020, 1-19.

AMA Style

Henrik Skaug Sætra. First, They Came for the Old and Demented. Human Arenas. 2020:1-19.

Chicago/Turabian Style

Henrik Skaug Sætra. 2020. "First, They Came for the Old and Demented." Human Arenas: 1-19.

Journal article
Published: 22 June 2020 in Barataria. Revista Castellano-Manchega de Ciencias Sociales

God gave us the Earth, to use and enjoy. So says the Bible, and so says John Locke (1632-1704). The individualism and liberalism in Locke's philosophy make it decidedly modern and appealing to us today. However, he often uses God as a source of truth and premises in his arguments. This undermines the modern appearance and leaves us with a philosophy that is at times contradictory, at times brilliant, and at all times fixed to the anthropocentric rail that guides his philosophy. In this article, the element of Locke's philosophy that concerns humanity's relationship with the natural world is examined. Particular attention is paid to the value and nature of both biotic and abiotic nature. I argue that the religious aspects of Locke's philosophy cannot be purged in an effort to render him a pure rationalist, and this leads me to focus on how the religious aspects relate to Locke's rationalism, and in particular what implications his combination of philosophy and theology carries for the prospects of a Lockean environmentalism. I conclude that such environmentalism has clear limitations, while still providing certain foundations for the idea of sustainability and scientific conservationism.

ACS Style

Henrik Skaug Sætra. The limits of a Lockean Environmentalism: God, Human Beings, and Nature in Locke's philosophy. Barataria. Revista Castellano-Manchega de Ciencias Sociales 2020, 1-17.

AMA Style

Henrik Skaug Sætra. The limits of a Lockean Environmentalism: God, Human Beings, and Nature in Locke's philosophy. Barataria. Revista Castellano-Manchega de Ciencias Sociales. 2020;(27):1-17.

Chicago/Turabian Style

Henrik Skaug Sætra. 2020. "The limits of a Lockean Environmentalism: God, Human Beings, and Nature in Locke's philosophy." Barataria. Revista Castellano-Manchega de Ciencias Sociales, no. 27: 1-17.

Journal article
Published: 08 June 2020 in Technology in Society

Artificial intelligence (AI) has proven to be superior to human decision-making in certain areas. This is particularly the case whenever there is a need for advanced strategic reasoning and analysis of vast amounts of data in order to solve complex problems. Few human activities fit this description better than politics. In politics we deal with some of the most complex issues humans face, short-term and long-term consequences have to be balanced, and we make decisions knowing that we do not fully understand their consequences. I examine an extreme case of the application of AI in the domain of government, and use this case to examine a subset of the potential harms associated with algorithmic governance. I focus on five objections based on political theoretical considerations and the potential political harms of an AI technocracy. These are objections based on the ideas of 'political man' and participation as a prerequisite for legitimacy, the non-morality of machines, and the value of transparency and accountability. I conclude that these objections do not successfully derail AI technocracy, if we make sure that mechanisms for control and backup are in place, and if we design a system in which humans have control over the direction and fundamental goals of society. Such a technocracy, if the AI capabilities of policy formation here assumed become reality, may, in theory, provide us with better means of participation, legitimacy, and more efficient government.

ACS Style

Henrik Skaug Sætra. A shallow defence of a technocracy of artificial intelligence: Examining the political harms of algorithmic governance in the domain of government. Technology in Society 2020, 62, 101283.

AMA Style

Henrik Skaug Sætra. A shallow defence of a technocracy of artificial intelligence: Examining the political harms of algorithmic governance in the domain of government. Technology in Society. 2020;62:101283.

Chicago/Turabian Style

Henrik Skaug Sætra. 2020. "A shallow defence of a technocracy of artificial intelligence: Examining the political harms of algorithmic governance in the domain of government." Technology in Society 62: 101283.

Correction
Published: 22 May 2020 in Integrative Psychological and Behavioral Science

The original version of this article unfortunately contained a mistake.

ACS Style

Henrik Skaug Sætra. Correction to: The Parasitic Nature of Social AI: Sharing Minds with the Mindless. Integrative Psychological and Behavioral Science 2020, 54, 327.

AMA Style

Henrik Skaug Sætra. Correction to: The Parasitic Nature of Social AI: Sharing Minds with the Mindless. Integrative Psychological and Behavioral Science. 2020;54(2):327.

Chicago/Turabian Style

Henrik Skaug Sætra. 2020. "Correction to: The Parasitic Nature of Social AI: Sharing Minds with the Mindless." Integrative Psychological and Behavioral Science 54, no. 2: 327.

Regular article
Published: 17 March 2020 in Integrative Psychological and Behavioral Science

Can artificial intelligence (AI) develop the potential to be our partner, and will we be as sensitive to its social signals as we are to those of human beings? I examine both of these questions and how cultural psychology might add such questions to its research agenda. There are three areas in which I believe there is a need for both a better understanding and added perspective. First, I will present some important concepts and ideas from the world of AI that might be beneficial for pursuing research topics focused on AI within the cultural psychology research agenda. Second, there are some very interesting questions that must be answered with respect to central notions in cultural psychology as these are tested through human interactions with AI. Third, I claim that social robots are parasitic to deeply ingrained human social behaviour, in the sense that they exploit and feed upon processes and mechanisms that evolved for purposes that were originally completely alien to human-computer interactions.

ACS Style

Henrik Skaug Sætra. The Parasitic Nature of Social AI: Sharing Minds with the Mindless. Integrative Psychological and Behavioral Science 2020, 54, 308-326.

AMA Style

Henrik Skaug Sætra. The Parasitic Nature of Social AI: Sharing Minds with the Mindless. Integrative Psychological and Behavioral Science. 2020;54(2):308-326.

Chicago/Turabian Style

Henrik Skaug Sætra. 2020. "The Parasitic Nature of Social AI: Sharing Minds with the Mindless." Integrative Psychological and Behavioral Science 54, no. 2: 308-326.

Chapter
Published: 12 December 2019 in Memories of Gustav Ichheiser

Causality is a thorny issue in debates about explanation, and a quote from an introductory textbook on economics may serve as an example of how it is sometimes used. It states that "higher interest rates cause people to save more" (Lipsey, Chrystal, Economics. Oxford University Press, Oxford, p 15, 2004). How do higher interest rates cause this change in people? Exactly how are we to explain this? This chapter is a constructive expansion of the contributions by Malnes, Valsiner, and Zickfeld and Schubert to the volume. For most people, and even for scientists and philosophers at times, there is actually a world out there. That a stone actually is what they perceive it to be is self-evident, and so is the fact that people around them both exist and function pretty much like themselves. They can easily describe these people and explain their constitutive characteristics. People drive their cars to work, and if questioned, they'll explain to us that their cars are able to do what they do because they put fuel, or electricity, in them, which is then used to create various reactions in the engine that propel the car forward or backward. When asked why they go to work, they'll explain that they do so for various reasons, such as the need for money, their love of what they do, the importance of what they do, or perhaps just the fact that they want to get some time away from those who remain at home all day. The funny thing is, once we go deeper into the concept of explanation in the social sciences, there is little that remains self-evident.

ACS Style

Henrik Skaug Sætra. Explaining Social Phenomena: Emergence and Levels of Explanation. Memories of Gustav Ichheiser 2019, 169-185.

AMA Style

Henrik Skaug Sætra. Explaining Social Phenomena: Emergence and Levels of Explanation. Memories of Gustav Ichheiser. 2019:169-185.

Chicago/Turabian Style

Henrik Skaug Sætra. 2019. "Explaining Social Phenomena: Emergence and Levels of Explanation." Memories of Gustav Ichheiser: 169-185.

Journal article
Published: 08 July 2019 in Technology in Society

‘Big Brother is watching you!’ the posters in Orwell's Oceania told all its inhabitants. We have no such posters, but we live in the era of Big Data, and someone is watching us. Here, I discuss how Big Data is an omniscient and ubiquitous presence in our society. I then examine to what degree Big Data threatens liberty in both the negative and positive conception of the term. I arrive at three propositions: a) Big Data threatens privacy and enables surveillance, b) the lack of alternatives to lifestyles that involve feeding into Big Data leads to something akin to forced participation in the surveillance of Big Brother, and c) surveillance and lack of privacy are a threat to freedom, because i) the information gathered can be abused, ii) people have a right not to be observed (even if the surveillance is completely benign), and iii) being observed is an intervention that can affect those who are observed. Together, these propositions lead to the conclusion that Big Data threatens liberty. I argue that the positive conception of liberty provides the strongest argument against how we currently employ Big Data, but that the negative conception can also provide a sufficiently strong argument. On this basis, a liberal defence of privacy, and thus also of liberty, against this new form of surveillance can be established.

ACS Style

Henrik Skaug Sætra. Freedom under the gaze of Big Brother: Preparing the grounds for a liberal defence of privacy in the era of Big Data. Technology in Society 2019, 58, 101160.

AMA Style

Henrik Skaug Sætra. Freedom under the gaze of Big Brother: Preparing the grounds for a liberal defence of privacy in the era of Big Data. Technology in Society. 2019;58:101160.

Chicago/Turabian Style

Henrik Skaug Sætra. 2019. "Freedom under the gaze of Big Brother: Preparing the grounds for a liberal defence of privacy in the era of Big Data." Technology in Society 58: 101160.

Journal article
Published: 08 July 2019 in Technology in Society

Never before have we had access to as much information as we do today, but how do we avail ourselves of it? In parallel with the increase in the amount of information, we have created means of curating and delivering it in sophisticated ways, through the technologies of algorithms, Big Data and artificial intelligence. I examine how information is curated, and how digital technology has led to the creation of filter bubbles, while simultaneously creating closed online spaces in which people of similar opinions can congregate: echo chambers. These phenomena partly stem from our tendency towards selective exposure, a tendency to seek information that supports pre-existing beliefs and to avoid unpleasant information. This becomes a problem when the information and the suggestions we receive, and the way we are portrayed, create expectations and thus become leading. When the technologies I discuss are employed as they are today, combined with human nature, they pose a threat to liberty by undermining individuality, autonomy and the very foundation of liberal society. Liberty is an important part of our image of the good society, and this article is an attempt to analyse one way in which applications of technology can be detrimental to our society. While Alexis de Tocqueville feared the tyranny of the majority, we would do well to fear the tyranny of the algorithms and perceived opinion.

ACS Style

Henrik Skaug Sætra. The tyranny of perceived opinion: Freedom and information in the era of big data. Technology in Society 2019, 59, 101155.

AMA Style

Henrik Skaug Sætra. The tyranny of perceived opinion: Freedom and information in the era of big data. Technology in Society. 2019;59:101155.

Chicago/Turabian Style

Henrik Skaug Sætra. 2019. "The tyranny of perceived opinion: Freedom and information in the era of big data." Technology in Society 59: 101155.

Journal article
Published: 26 April 2019 in Technology in Society

In this article, I examine how nudging powered by Big Data relates to both negative and positive liberty. I focus in particular on how liberty is affected by appeals to irrational mechanisms. I conclude that it is problematic to use liberty as an argument for nudging. Such an argument would have to be based on the concept of positive liberty, empowerment and emancipation from irrationality, but I argue that even stronger arguments against nudging can be built on the same conception of liberty. I consider Big Data-powered nudging to have the potential to be both manipulative and coercive, and believe that we should be wary of the effects such efforts have on liberty. As I consider liberty to be part of what makes a good society, this becomes an effort to analyse one aspect of the effects of technology on society in general. While I do not accept arguments in favour of nudging based on liberty, it is easier to see that arguments based on utility could support nudging. I do not evaluate what the proper trade-off is between utility and liberty in this article, and it is obvious that, at times, utility trumps an absolute demand for liberty. However, I argue in favour of transparent traditional regulation and rational persuasion instead of nudging, when these approaches can serve the same purposes. Should we choose to nudge, we should not euphemise our efforts by claiming that we do so on behalf of freedom.

ACS Style

Henrik Skaug Sætra. When nudge comes to shove: Liberty and nudging in the era of big data. Technology in Society 2019, 59, 101130.

AMA Style

Henrik Skaug Sætra. When nudge comes to shove: Liberty and nudging in the era of big data. Technology in Society. 2019;59:101130.

Chicago/Turabian Style

Henrik Skaug Sætra. 2019. "When nudge comes to shove: Liberty and nudging in the era of big data." Technology in Society 59: 101130.

Arena of changing
Published: 19 September 2018 in Human Arenas

Human beings have used technology to improve their efficiency throughout history. We continue to do so today, but we are no longer only using technology to perform physical tasks. Today, we make computers that are smart enough to challenge, and even surpass, us in many areas. Artificial intelligence, embodied or not, now drives our cars, trades stocks, socialises with our children, keeps the elderly company and the lonely warm. At the same time, we use technology to gather vast amounts of data on ourselves. This, in turn, we use to train intelligent computers that ease and customise ever more of our lives. The change that occurs in our relations to other people, and to computers, alters both how we act and how we are. What sort of challenges does this development pose for human beings? I argue that we are seeing an emerging challenge to the concept of what it means to be human, as (a) we struggle to define what makes us special and try to come to terms with being surpassed in various ways by computers, and (b) the way we use and interact with technology changes us in ways we do not yet fully understand.

ACS Style

Henrik Skaug Sætra. The Ghost in the Machine. Human Arenas 2018, 2, 60-78.

AMA Style

Henrik Skaug Sætra. The Ghost in the Machine. Human Arenas. 2018;2(1):60-78.

Chicago/Turabian Style

Henrik Skaug Sætra. 2018. "The Ghost in the Machine." Human Arenas 2, no. 1: 60-78.

Regular article
Published: 05 July 2018 in Integrative Psychological and Behavioral Science

We now live in the era of big data, and according to its proponents, big data is poised to change science as we know it. Claims of having no theory and no ideology are made, and there is an assumption that the results of big data are trustworthy because it is considered free from human judgement, which is often considered inextricably linked with human error. These two claims lead to the idea that big data is the source of better scientific knowledge, through more objectivity, more data, and better analysis. In this paper I analyse the philosophy of science behind big data and make the claim that the death of many traditional sciences, and the human scientist, is much exaggerated. The philosophy of science of big data means that there are certain things big data does very well, and some things that it cannot do. I argue that humans will still be needed for mediating and creating theory, and for providing the legitimacy and values science needs as a normative social enterprise.

ACS Style

Henrik Skaug Sætra. Science as a Vocation in the Era of Big Data: the Philosophy of Science behind Big Data and humanity's Continued Part in Science. Integrative Psychological and Behavioral Science 2018, 52, 508-522.

AMA Style

Henrik Skaug Sætra. Science as a Vocation in the Era of Big Data: the Philosophy of Science behind Big Data and humanity's Continued Part in Science. Integrative Psychological and Behavioral Science. 2018;52(4):508-522.

Chicago/Turabian Style

Henrik Skaug Sætra. 2018. "Science as a Vocation in the Era of Big Data: the Philosophy of Science behind Big Data and humanity's Continued Part in Science." Integrative Psychological and Behavioral Science 52, no. 4: 508-522.