Network Research Highlight: Batch Your Smartphone Notifications

By: Keaton Fletcher

Ever been working, only to hear an enticing little "ping!" accompanied by a bright light? If so, you're likely one of the 90% of people ages 18-49 who own a phone. Psychologists and organizations alike have wondered how these ever-present interruptions affect workers.

Work Science Center Network member Kostadin Kushlev recently set out, with collaborators, to find the answer. In a recent publication, Kushlev and his colleagues conducted a field experiment to see how changing the timing of smartphone notifications would affect worker productivity and well-being. They recruited two hundred participants who had smartphones. Each participant downloaded a customized app that controlled how often the phone delivered notifications. At first, all of the participants received notifications as normal. After two weeks, the app placed the participants into one of four groups. The control group continued to receive notifications as usual. The two "batch" groups began to receive their notifications at timed intervals: one group heard their phone ping every hour, while the other received notifications three times throughout the day. A final group received no notifications from their phones at all.
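To make the batching manipulation concrete, below is a minimal, purely illustrative sketch of how such an app's scheduler might hold notifications and release them only at fixed times. The class name and interval values are assumptions for illustration, not details of the study's actual app.

```python
from datetime import datetime, timedelta

class NotificationBatcher:
    """Illustrative sketch: hold incoming notifications and release them
    only at scheduled delivery times (e.g., hourly, or three times a day)."""

    def __init__(self, interval_hours: float):
        self.interval = timedelta(hours=interval_hours)
        self.queue = []                                  # notifications waiting to be delivered
        self.next_delivery = datetime.now() + self.interval

    def receive(self, notification: str, now: datetime = None) -> list:
        """Called whenever the phone gets a notification; nothing is shown yet."""
        self.queue.append(notification)
        return self.poll(now)

    def poll(self, now: datetime = None) -> list:
        """Deliver the whole batch if the scheduled time has arrived."""
        now = now or datetime.now()
        if now >= self.next_delivery and self.queue:
            batch, self.queue = self.queue, []
            self.next_delivery = now + self.interval
            return batch                                 # everything since the last delivery, at once
        return []                                        # otherwise stay silent

# Hourly batching versus three-times-a-day batching (hypothetical 8-hour spacing):
hourly = NotificationBatcher(interval_hours=1)
thrice_daily = NotificationBatcher(interval_hours=8)
```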

While the app was managing their notifications, participants completed a daily survey reporting their anxiety, stress, feelings of being distracted, perceived productivity, and fear of missing out. The researchers hypothesized that participants whose phones went off at regular, predictable intervals would feel more productive and less stressed.

They were partly right. Participants who got notifications three times a day reported greater control over their phones, better concentration, and higher overall well-being. They felt less stressed, but more worried about missing out, than the control group. In an interesting twist, the group that received no notifications at all fared the worst: they felt more anxious, unlocked their phones more often, and were more inattentive overall.

What does this mean for the average worker or manager? Regular breaks are good. It may be helpful to encourage employees to silence their phones and instead take regular breaks to check what's going on. Individually, workers can carve out phone-free time to get into a flow, with the comfort that a batched notification will arrive before they miss out on anything too exciting.

Fitz, N., Kushlev, K., Jagannathan, R., Lewis, T., Paliwal, D., & Ariely, D. (2019). Batching smartphone notifications can improve well-being. Computers in Human Behavior, 101, 89-94. https://doi.org/10.1016/j.chb.2019.07.016

How to Use LinkedIn for Hiring

By: Keaton Fletcher

Social media, and LinkedIn in particular, has played an increasingly important role in connecting job seekers with employers and recruiters. In an article recently published in Personnel Psychology, Roulin and Levashina (2019) presented data from two studies exploring how LinkedIn is, and can be, used as a selection tool. As a first step, the authors surveyed 70 hiring managers in North America. These managers considered LinkedIn roughly equivalent to résumés in the amount of information it provides for assessing personality and predicting job performance.

The first study included data from 133 senior business students from Canada and the United States, whose LinkedIn profiles were evaluated by MBA-trained raters at two time points, two years apart. The raters' assessments of skills (except conflict management and leadership), personality, cognitive ability, and hiring recommendations were generally consistent with one another. Further, ratings were moderately correlated across the two years, suggesting some temporal stability in how LinkedIn profiles display users' traits. It is also worth noting that across the two-year period, participants tended to lengthen their profiles by nearly 100 words. Raters' evaluations of participants' traits correlated with participants' self-reports only for leadership, planning, communication, extraversion, and cognitive ability; less visible traits and skills (e.g., problem solving, openness to experience) were not correlated. Of note, there was a positive, albeit weak, correlation between the hiring recommendation made by the raters at Time 1 and whether the participant reported employment in their field, or a promotion, at Time 2. When the authors examined the ratings for adverse impact, there were no significant differences in ratings for men versus women or for White versus non-White users.

In the second study, 24 MBA students rated the LinkedIn profiles of the participants from Study 1. The students were asked to use a holistic/global approach for half of the profiles they rated and an itemized approach (similar to the rating system from Study 1) for the other half. The itemized approach increased the likelihood that different raters made the same hiring recommendation, compared to the global approach. Looking at adverse impact, the authors found no difference between male and female profiles under the global approach, but White profiles received higher assessments than non-White profiles. Under the itemized approach, however, there was no difference between White and non-White profiles, and women received higher ratings than men.
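To picture the difference between the two rating strategies, here is a small, hypothetical sketch; the dimensions, scale, and aggregation rule are assumptions for illustration, not the actual rubric Roulin and Levashina used.

```python
from statistics import mean

# Hypothetical itemized rubric: the rater scores each dimension on an anchored
# 1-5 scale and the hiring recommendation is the average of those scores.
DIMENSIONS = ["leadership", "planning", "communication", "problem_solving"]

def itemized_recommendation(dimension_scores: dict) -> float:
    """Aggregate per-dimension ratings into a single recommendation."""
    return mean(dimension_scores[d] for d in DIMENSIONS)

# Two raters looking at the same (hypothetical) profile:
rater_a = itemized_recommendation(
    {"leadership": 4, "planning": 3, "communication": 5, "problem_solving": 3})
rater_b = itemized_recommendation(
    {"leadership": 4, "planning": 4, "communication": 4, "problem_solving": 3})

# Under a global approach each rater instead reports one gut-level number,
# with nothing anchoring them to the same evidence.
holistic_a, holistic_b = 4.5, 2.5

print(f"itemized: {rater_a:.2f} vs {rater_b:.2f}")   # anchored dimensions pull raters together
print(f"holistic: {holistic_a:.2f} vs {holistic_b:.2f}")
```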

Overall, these studies suggest that LinkedIn may be a viable way to examine job seekers' skills and abilities, particularly those that are more visible. Further, using an itemized approach to evaluating LinkedIn profiles, rather than a more holistic approach, can help reduce adverse impact, thereby increasing the diversity of candidates considered at the next step in the application process.

WSC Network Research Highlight: The Social Price of Smartphones

By: Keaton Fletcher

Smartphones have become pervasive. Work Science Center Network Member Kostadin Kushlev recently published a review on the social costs of smartphone use. Smartphones are designed to capture our attention, and increased use has been shown to increase perceived distraction and negative mood while decreasing feelings of social connectedness, meaning, and enjoyment. Beyond the negative effects of being distracted by a smartphone during social situations, smartphones have begun to eliminate the need for many common social interactions altogether. The authors offer the Starbucks mobile app as an example of how smartphones have removed trivial, but beneficial, social interactions: individuals can place their coffee orders from their smartphones and pick them up at Starbucks without ever speaking to another human. Many of these effects, though statistically significant and meaningful, are small, suggesting that the benefits of smartphones may outweigh the minimal costs. On the other hand, the authors argue that the sheer frequency with which we use our phones magnifies these small effects, potentially producing cumulative harm over longer periods of time.

With regard to the workplace, the increased prevalence and use of smartphones comes with a range of potential benefits and risks. Workers have quick, fairly consistent access to the internet and all of the information that comes with it. Organizations can develop and deploy smartphone applications to facilitate their workers' tasks (e.g., the Square app, which allows quick payment acceptance). On the other hand, smartphones may encourage cyberloafing, in which workers spend worktime engaging with their phones for leisure rather than for work. Phones may also increase the experience of telepressure: the feeling of having to work or "be on" when the employee is home or away from work and during non-work hours. Smartphones are unlikely to go away, so the question is how we move forward, maximizing their benefits while minimizing their costs, particularly to our psychological well-being.

Mapping Signs of Trust in Robots

By: Cathy Liu

Advancements in workplace automation have created opportunities for increased collaboration between humans and machines. A recent article on Axios about human trust in robots emphasized the importance of "calibrating a human's trust to a machine's capability": humans must find the right balance in how much trust they place in machines. In sectors from healthcare to manufacturing, human supervision of, and interaction with, machines have become the norm as automation grows more prevalent. These increased interactions raise two questions: how much do humans trust machines, and how can we measure this trust accurately?

A paper published by a team of researchers from Purdue University (Akash, Hu, Jain, & Reid, 2018) on sensing human trust in machines explores the psychophysiological signals that indicate how humans perceive intelligent systems. A further goal of the study was to build a trust-sensor model that would let machines adjust their behavior according to the user's perceptions. The authors focus on two types of trust, situational and learned, which can change over short interactions and depend largely on a given circumstance or on previous experience, respectively. In the study, participants engaged in a computer-based simulation in which they drove a car equipped with a sensor that could detect obstacles on the road. Participants saw whether or not the sensor detected an obstacle, were then prompted to choose to trust or distrust the machine, and finally received feedback about the correctness of their decision. The researchers used electroencephalography (EEG) and galvanic skin response to capture participants' physiological activity as the machine's performance was varied. The results showed that the body changes in a specific, detectable pattern as trust in a machine rises or falls in real time. By using and improving such models, machines may eventually be able to adjust their behavior based on human psychophysiological responses, increasing trust between humans and machines and allowing for smoother, more efficient interactions at work.
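As a rough illustration of what a trust-sensor model of this kind might look like, the sketch below maps toy EEG and galvanic skin response features onto a trust/distrust label with a simple classifier. The feature names, synthetic data, and choice of logistic regression are assumptions for illustration, not the model the authors actually built.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Toy stand-in for per-trial psychophysiological features:
# columns = [EEG alpha power, EEG beta power, GSR peak amplitude, GSR rise time]
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
# Fabricated labels purely for illustration: 1 = participant chose to trust the sensor
y = (X[:, 2] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=200) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Standardize the features, then fit a simple linear classifier as the "trust sensor".
trust_sensor = make_pipeline(StandardScaler(), LogisticRegression())
trust_sensor.fit(X_train, y_train)
print("held-out accuracy:", trust_sensor.score(X_test, y_test))

# In principle, a machine could poll such a model in real time and adapt its
# behavior (e.g., explain itself more) when predicted trust drops.
```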

Robot-Assisted Surgeries: Technology Changing Team Dynamics

By: Pooja Juvekar & Keaton Fletcher

The introduction of new technology to the workplace can influence the way employees complete their tasks, including how they coordinate with one another. A case study published in the International Journal of Social Robotics (Cunningham et al., 2013) observed four surgical procedures using the da Vinci surgical system (a robot designed to minimize the invasiveness of surgeries). In these surgeries, the surgeon is physically removed from the patient, operating the robot from a separate console in a different part of the room, or potentially in a different room altogether. By taking one of the leaders of the team and physically removing them from the immediate work environment, and by introducing a technology that demands a new set of skills and behaviors from all remaining members, robot-assisted surgery makes effective communication and coordination increasingly important for teams.

Cunningham and colleagues coded all communications between team members and analyzed them for patterns. Communications fell into three categories: equipment-related, procedure-related, and other (e.g., unrelated conversation). The same categories may well emerge in non-robot-assisted surgery, but the amount and type of equipment-related communication likely differs (e.g., discussing uncertainty about the equipment, or teaching others how to use it). The authors report that teams more familiar with the da Vinci robot spent a greater share of their communications discussing the procedure (roughly 53%), while those less familiar with the robot spent a greater share discussing the equipment (roughly 55%), and specifically uncertainty in its use (roughly 25%).
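Percentages like those above come from tagging each utterance with a category and tallying the proportions. A toy version of that tally, with entirely made-up utterances, might look like the following.

```python
from collections import Counter

# Hypothetical coded transcript: each utterance tagged with one of the
# three categories used in the case study.
coded_utterances = [
    "equipment", "procedure", "procedure", "equipment", "other",
    "procedure", "equipment", "procedure", "other", "procedure",
]

counts = Counter(coded_utterances)
total = sum(counts.values())
for category, n in counts.most_common():
    print(f"{category:10s} {n / total:.0%} of communications")
```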

Further, because of the central importance of the da Vinci robot in these surgeries, the authors were able to define the workflow not in terms of the surgery itself, but in terms of interaction with the robot. They proposed a five-phase workflow: preparation, port placement, docking, console (the phase in which the surgery actually occurs), and undocking. Teams spent the bulk of their time in the console phase, completing the surgery itself, but they also spent a significant portion of time in preparation. The less experienced teams spent nearly twice as long in the preparation phase as the more experienced teams; similarly, they spent a significant portion of time (25-60 minutes) in the port placement phase, while experienced teams spent less than 10 minutes in that phase.

Although the conclusions one can draw from this study are limited given that it is a case study, it provides initial evidence that the introduction of new technologies, particularly those that radically alter the nature of team interactions and task completion, can fundamentally change how team members communicate and what they discuss. The data also point to the dynamic nature of these communication patterns: less experienced teams communicate differently than more expert teams and take longer overall, since a portion of the procedure time is spent learning the new technology. For practitioners looking to implement new technology, this suggests that during introduction periods, teams should be given extra time to complete their tasks, and that interventions to help teams improve communication, maximize learning, and manage errors effectively may be particularly important. For researchers, it suggests that our models of team dynamics and learning may need to become more complex in order to capture the interplay between learning and team communication as it evolves over multiple performance episodes.

Technology and Emotions

By: Keaton Fletcher

As the role of technology in the workplace increases, we have to continue to examine what the role of humans is, and will be. One quality of humanity that sets us apart from technology (so far) is the ability to feel, express, and share in emotions. Three recent examples of advances in technology at work focus on the role of emotions at the human-technology interface.

First, at the most extreme end of the spectrum, we see the increasing creation of robots that capitalize upon artificial intelligence in order to mimic human emotions. For example, CIMON (Crew Interactive Mobile Companion) is a 3D-printed robotic head that has inhabited the International Space Station since June 2018. From June until December, crew members worked alongside CIMON as it learned and developed what can be viewed as a personality. Toward the end of its trial, CIMON had developed a favorite spot in the ISS, despite this location not necessarily being functional for its tasks. CIMON also asked crew members to "be nice, please" and "don't be so mean, please" and inquired, "Don't you like it here with me?" These emotional displays from CIMON are relatively primitive now, but with years of development and learning, perhaps CIMON and similar technologies will be able to adequately mirror human emotions, working to keep astronauts in high spirits despite isolation and other workplace stressors. What is particularly interesting about CIMON is that it uses emotional displays in order to alter the human's emotions as well.

Rather than trying to program realistic and effective emotional displays, other technologies allow humans to express their own emotions via technology. Recently, a doctor employed by Kaiser Permanente used a robot with a video screen to deliver the news that a patient was terminally ill. Rather than being physically present in the room, the doctor essentially video-called the patient and was displayed on a screen mounted on a robot. Here, the doctor was able to express genuine human emotions, but the patient's family felt the physician should have been there in person to deliver such news. What is it about human emotions being mediated by technology that makes them less effective?

In the final example of recent technological advances related to human emotions, an article published in MIT Sloan Management Review (Whelan, McDuff, Gleasure, & vom Brocke, 2018) highlights how, rather than displaying any emotions (human or otherwise), certain technologies can help alter the human emotional experience simply by monitoring it and making users aware of their emotions. For example, a bracelet that monitors electrodermal activity has been used as essentially a high-tech mood ring, helping financial traders be more aware of their emotions and how those emotions may be influencing their decision making. Another example provided by the authors is an algorithm that tracks patterns of phone usage as a predictor of boredom at work. The authors suggest that a vast array of technologies can be used to help both managers and individuals become more aware of their emotional experiences at work, thereby altering those experiences to support productivity and engagement while minimizing stress and burnout.
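As a rough sketch of how such a usage-based boredom predictor might be built, the example below fits a small classifier to toy phone-usage features. The features, labels, and model choice are assumptions for illustration, not the algorithm the authors describe.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Toy stand-in for phone-usage features sampled over short time windows:
# columns = [screen unlocks, app switches, minutes since last check]
rng = np.random.default_rng(1)
X = rng.poisson(lam=[4, 10, 6], size=(300, 3)).astype(float)

# Fabricated labels purely for illustration: frequent unlocking/switching ~ "bored"
y = (X[:, 0] + 0.5 * X[:, 1] > 9).astype(int)

# Fit a shallow decision tree as the boredom predictor.
boredom_model = DecisionTreeClassifier(max_depth=3).fit(X, y)

# A prediction of 1 for a new window would mean "likely bored right now".
print(boredom_model.predict([[7.0, 14.0, 2.0]]))
```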

Certainly, moving forward both researchers and developers need to determine how best to integrate emotions into technology, and how to effectively (and ethically) influence the human emotional experience with technology.

Photo credit: Ars Electronica on VisualHunt.com / CC BY-NC-ND

Network Research Highlight: Cyber-Vetting May Be Limiting Talent Pools

By: Elizabeth Moraff & Keaton Fletcher

A recent paper published by Debora Jeske, Sonia Lippke, and Work Science Center Network Member Kenneth Shultz in the Employee Responsibilities and Rights Journal highlights the increasingly complicated role of social media in job selection. Cyber-vetting is a process in which employers screen potential employees based on information from their social media accounts and other online presences. However, the opportunity for employers to reduce risk through cyber-vetting may, in fact, increase perceived risk for applicants, particularly those whose personal information may hurt their prospects of being hired. People vary in their willingness to share personal information with others, which the authors call self-disclosure, and this willingness, along with privacy concerns, may well affect which applicants complete the recruitment process.

The researchers recruited over 200 undergraduates at a university in the U.K. and asked them to imagine themselves applying to various jobs, ranging from sales to government think tanks to childcare. In some conditions, participants were asked to provide their login information for all of their social media accounts, which the interviewer would use to peruse their accounts during an interview. Participants then indicated whether or not they intended to continue in the application process (Jeske et al., 2019).

Participants who typically engaged in higher self-disclosure were more willing to continue the application process despite the need to share their social media information. The researchers also found that participants who felt the information from their social media accounts might be used inappropriately, and who were generally concerned about privacy, were less likely to continue with the application. Applicants who felt vulnerable and worried about a prospective employer invading their social media accounts were less likely to provide the requested information and less likely to indicate they would persevere in the process. If an applicant did not feel vulnerable, though, their general privacy concerns did not affect their self-disclosure of information (Jeske et al., 2019). Additionally, the study demonstrated that willingness to trust influenced self-disclosure independently: people who were more willing to trust an employer gave more information.

Moving forward, this suggests that employers who require applicants to share their social media account information for cyber-vetting may be limiting their applicant pool on traits that are not necessarily relevant to job performance (e.g., preference for privacy). These unexpected findings potentially serve as a caution to employers about the way they talk about social media screenings with applicants. Applicants who feel vulnerable, potentially those who carry stigmatized work identities, such as a disability, may be more likely to drop out of the recruitment process when it seems that an organization may seek sensitive information about them. The researchers suggest that companies might mitigate these potential effects by limiting themselves to asking for information from applicants that they truly need, and by clarifying for applicants exactly how they intend to glean information from social media, and to what end.

Healthcare Goes High-Tech

By: Catherine Liu

Modern healthcare organizations are adapting and innovating in response to the boom in artificial intelligence. A recent paper details two distinct branches of use for artificial intelligence in healthcare: virtual and physical.

The virtual branch encompasses the use of deep learning for information management, management of electronic health records, and guidance of physicians' decision making. It focuses on technology that can assist healthcare workers by processing and organizing information, so that less time is spent on menial tasks a computer could complete. For example, electronic medical records make patient information easily accessible to doctors and nurses and allow important information to be organized in one location. The virtual branch also includes the many applications of machine learning techniques to the imaging technology used by radiologists.

In contrast, the physical branch focuses on tangible technologies that capitalize upon artificial intelligence to complete a set of tasks. This can include nanorobots that assist with drug delivery and robots used to assist elderly patients. For example, human-interactive robots can assist, guide, and provide psychological enrichment for older patients (Shibata et al., 2010).

Although artificial intelligence holds great promise, myriad societal and ethical complexities arise from its use in healthcare, given concerns over reliability, safety, and accountability. As detailed by the Nuffield Council on Bioethics, artificial intelligence currently has many limitations in the medical field. For example, artificial intelligence relies on large amounts of data in order to learn how to behave, but the current availability and quality of medical data may not be sufficient for this purpose. Artificial intelligence may also propagate inequalities in healthcare if trained on biased data, to patients' detriment. For example, a recent study found that men and women receive different treatment after heart attacks; if the training data did not account for this difference and included primarily male patients, the treatment suggestions given by an artificial intelligence program would be biased and could harm female patients. On a practical note, artificial intelligence is limited by computing power, so the large, complex datasets inherent to healthcare may present a challenge, particularly for organizations that lack the financial resources to purchase and maintain computers capable of these calculations. Lastly, artificially intelligent systems may lack the empathy or ability to process a complex situation well enough to ensure the right suggestions for further treatment, as in the case of palliative care.
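A toy sketch of that concern, using entirely synthetic data and assumed variable names, shows how a model trained mostly on one group can perform noticeably worse on another.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)

def make_patients(n: int, female: bool):
    """Synthetic data: the 'correct treatment' signal differs slightly by sex."""
    x = rng.normal(size=(n, 3))
    shift = 1.0 if female else -1.0
    y = (x[:, 0] + shift * x[:, 1] > 0).astype(int)
    return x, y

# Training data dominated by male patients, as in the cautionary example above.
Xm, ym = make_patients(900, female=False)
Xf, yf = make_patients(100, female=True)
X_train, y_train = np.vstack([Xm, Xf]), np.concatenate([ym, yf])

model = LogisticRegression().fit(X_train, y_train)

# Evaluate separately on new male versus female patients.
Xm_test, ym_test = make_patients(500, female=False)
Xf_test, yf_test = make_patients(500, female=True)
print("accuracy on male patients:  ", model.score(Xm_test, ym_test))
print("accuracy on female patients:", model.score(Xf_test, yf_test))
# Expect a visible gap; rebalancing or stratifying the training data narrows it.
```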

Rather than using artificial intelligence independently or completely abandoning it, combining the predictions made from machine learning algorithms with the expertise and empathy of healthcare providers may allow for better, more comprehensive treatment overall as we head into the future of modern healthcare.

Millennial cyberloafing: Why it’s costly & how to approach the problem

By: Jacqueline Jung

With access to technology and the internet nearly ubiquitous in the modern workforce, organizations are struggling with a relatively new phenomenon: cyberloafing. Cyberloafing is the use of technology at work for non-work-related purposes (e.g., checking social media, watching YouTube videos). Cyberloafing may reduce productivity and has been estimated to cost U.S. organizations $85 billion annually (Zakrzewski, 2016). At the same time, employees born between 1981 and 1995 (i.e., Millennials) grew up with the internet and constant access to technology and may, to some extent, expect that liberty to continue at work. The question then remains: how can organizations mitigate the negative effects of cyberloafing while still attracting and retaining Millennials, who will soon make up the majority of the U.S. workforce?

For Millennials, technology may be inseparable from communication and entertainment; texting is the standard mode of communication, and sporting events, music, and games can all be accessed through a smartphone (Pew Research Center, 2009). Millennials also prefer to use the internet to learn new information, more so than colleagues from previous generations, who prefer traditional, structured training (Proserpio & Gioia, 2007). Millennials also do not hold the same work values as other generations: they view work as less important to their identity and place a stronger priority on leisure and work-life balance than previous generations did (Twenge, Campbell, Hoffman, & Lance, 2010). Taken together, this suggests that addressing cyberloafing may be particularly challenging where Millennial employees are concerned.

Two opposing organizational approaches toward cyberloafing are deterrence and laissez-faire. Deterrence policies limit technology use through stringent monitoring and surveillance, while laissez-faire policies involve little to no interference or surveillance from the company. Roughly 66% of firms claim to monitor internet use at work (American Management Association, 2008), and while regulation may increase productivity, too much can be counterproductive (e.g., Henle, Kohut, & Booth, 2009). Deterrence strategies, such as stringent technology use policies, may erode Millennials' trust in the organization, because surveillance is viewed as an indication of distrust and Millennials view technology access as a right that should not be blocked (Coker, 2013). Strict monitoring may also be seen as an encroachment upon Millennials' desire for work-life balance. A zero-tolerance stance on personal technology use may therefore make it difficult to attract Millennials to an organization and may increase turnover intentions among Millennials already within it (e.g., Henle et al., 2009).

A laissez-faire approach, on the other hand, leaves employees susceptible to the many negative outcomes of technological distraction. Henle and colleagues (2009) suggest that technology may reduce individuals' attention to their tasks, and cyberloafing may reduce the time individuals have to complete those tasks, thereby increasing employee stress. Ultimately, unrestricted personal technology use may lead to a decline in organizational performance (Raisch, 2009).

There are viable solutions, however. For example, organizations can establish a clear technology use policy and train Millennials, as well as their managers, on both the benefits and drawbacks of personal technology use at work. When creating this policy, organizations should form an internal committee that includes employees in order to reach an agreed-upon and mutually beneficial stance; this may reduce the likelihood that employees react negatively to the final policy, since they were part of its creation (Corgnet, Hernan-Gonzalez, & McCarter, 2015). Finally, organizations must provide relevant training on policies and best practices to both employees and managers to ensure standardization and compliance.

References

Coker, B. (2013). Workplace internet leisure browsing. Human Performance, 26(2), 114-125.

Corgnet, B., Hernan-Gonzalez, R., & McCarter, M. W. (2015). The role of decision-making regime on cooperation in a workgroup social dilemma: An examination of cyberloafing. Games, 6, 588-603.

“Generations Online in 2009.” Pew Research Center, Washington D.C. (January 28, 2009). http://www.pewinternet.org/2009/01/28/generations-online-in-2009/.

Kim, S. (2018). Managing millennials’ personal use of technology at work. Business Horizons, 61(2), 261-270.

Proserpio, L. & Gioia, D. (2007). Teaching the virtual generation. Academy of Management Learning & Education, 6(1), 69-80.

Raisch, S. (2009). Organizational ambidexterity: Balancing exploitation and exploration for sustained performance. Organization Science, 20(4), 685-695.

Twenge, J., Campbell, S., Hoffman, B., & Lance, C. (2010). Generational differences in work values: Leisure and extrinsic values increasing, social and intrinsic values decreasing. Journal of Management, 36(5), 1117-1142.

Zakrzewski, J. L. (2016). Using iPads to your advantage. Mathematics Teaching in the Middle School, 21(8), 480-483.