Adult Online Learning: The Experience of Skill Building in the GT OMSCS Program

PIs:  Julia Melkers (Arizona State University) and Ruth Kanfer

Project:  Online skill-building graduate programs are rapidly gaining popularity among adults seeking to reskill or upskill in high-prospect fields such as computer science. The GT OMSCS program has been a leader and innovator in implementing an exclusively online graduate program in computing from an accredited university for a fraction of the cost of traditional, residential programs. With the support of the College of Computing and the Alfred P. Sloan Foundation, our interdisciplinary team has developed a cumulative research stream on the experiences and outcomes of adults in the program, particularly with respect to women and underrepresented groups.



Harvard Next Level Lab Speaker Series

The Harvard Next Level Lab is based at the Harvard Graduate School of Education within the larger research group known as Project Zero. Its work focuses on synthesizing findings from cognitive science, neuroscience, and the learning sciences as an approach to learning and workforce development. Its recently launched speaker series highlights scholars in the learning sciences, learning design and technology, and workforce development. Please follow the link to view upcoming talks: https://nextlevellab.gse.harvard.edu/next-level-lab-events.

Network Research Highlight: Batch Your Smartphone Notifications

By: Keaton Fletcher

Ever been working only to hear an enticing little “ping!” accompanied by a bright light? If so, you’re likely one of the 90% of people ages 18-49 who own a phone. Psychologists and organizations alike have wondered how these ever-present interruptions affect workers.

Work Science Center Network member Kostadin Kushlev and his fellow researchers recently set out to find the answer. In a recent publication, Kushlev and colleagues conducted a field experiment to see how changing the intervals of smartphone notifications would affect worker productivity and well-being. To do so, they recruited two hundred participants who had smartphones. Each participant downloaded a customized app that controlled how often the phone delivered notifications. At first, all of the participants received notifications as normal. After two weeks, the app placed each participant into one of four groups. The control group continued to receive notifications as normal. The two “batch” groups began to receive their notifications at timed intervals: one batch group heard their phones ping every hour, while the other received notifications three times throughout the day. A final group didn’t receive any notifications at all.
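The four notification conditions can be sketched as a simple delivery rule. This is our own minimal illustration of the design described above (the condition names and batch times are invented for illustration, not taken from the study's app):

```python
from datetime import time

# Hypothetical batch times for the "three times a day" arm.
BATCH_TIMES = [time(9, 0), time(13, 0), time(17, 0)]

def deliver_now(condition: str, now: time) -> bool:
    """Decide whether a queued notification is released at clock time `now`."""
    if condition == "control":        # notifications delivered as normal
        return True
    if condition == "hourly":         # batched once per hour, on the hour
        return now.minute == 0
    if condition == "three_daily":    # batched at three fixed times per day
        return now in BATCH_TIMES
    if condition == "silenced":       # no notifications at all
        return False
    raise ValueError(f"unknown condition: {condition}")
```

In the study itself the app controlled delivery automatically; the sketch only makes the contrast between the four groups concrete.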

While the phones were delivering notifications, participants completed a daily survey reporting on their anxiety, their stress levels, their feelings of being distracted, how productive they were, and their fear of missing out. The researchers hypothesized that those whose phones went off regularly, but not randomly, would feel more productive and less stressed.

They were partly right. Participants who got notifications three times a day felt more in control of their phones and reported better concentration and overall well-being. They felt less stressed than the control group, but did feel more worried about missing out. In an interesting twist, the group that received no notifications at all suffered the most: they felt more anxious, unlocked their phones more often, and were more inattentive overall.

What does this mean for the average worker or manager? Regular breaks are good. It may be helpful to encourage employees to silence their phones and instead take regular breaks to check on what’s going on. Individual workers can carve out phone-free times to get into a flow, with the comfort that they’ll get a notification before they miss out on anything too exciting.

Fitz, N., Kushlev, K., Jagannathan, R., Lewis, T., Paliwal, D., & Ariely, D. (2019). Batching smartphone notifications can improve well-being. Computers in Human Behavior, 101, 89-94. https://doi.org/10.1016/j.chb.2019.07.016

How to Use LinkedIn for Hiring

By: Keaton Fletcher

Social media, specifically LinkedIn, has played an increasingly important role in connecting job seekers with employers and recruiters. In an article recently published in Personnel Psychology, Roulin and Levashina (2019) presented data from two studies exploring how LinkedIn is, and can be, used as a selection tool. As a first step, the authors surveyed 70 hiring managers in North America. These managers considered LinkedIn roughly equivalent to résumés in the level of information it provides for assessing personality and predicting performance on the job.

The first study included data from 133 senior business students from Canada and the United States. MBA-trained raters evaluated the LinkedIn profiles of the participants across two years. The raters’ assessments of skills (except conflict management and leadership), personality, cognitive ability, and hiring recommendations were generally consistent with one another. Further, ratings were moderately correlated across the two years, suggesting some level of temporal stability in how LinkedIn profiles display users’ traits. It is also worth noting that across the two-year period, participants tended to increase the length of their profiles by nearly 100 words. Raters’ evaluations of participants’ traits correlated with participants’ self-reported traits only for leadership, planning, communication, extraversion, and cognitive ability; less visible traits and skills (e.g., problem solving, openness to experience) were not correlated. Of note, there was a positive, albeit weak, correlation between the hiring recommendation made by the raters at Time 1 and whether the participant reported employment in their field or a promotion at Time 2. When the authors examined ratings for adverse impact, there were no significant differences in the ratings made for men versus women or White versus non-White users.

In the second study, 24 MBA students rated the LinkedIn profiles of the participants from Study 1. The students were asked to use a holistic/global approach for half of the profiles they rated and an itemized approach (similar to the rating system from Study 1) for the other half of the profiles. Using an itemized approach increased the likelihood that different raters made the same recommendation for hiring compared to the global approach. Looking at adverse impact, the authors found no difference in male versus female profiles using the global approach but did find that White profiles were given higher assessments than non-White profiles. Using the itemized approach, however, results showed no difference between White and non-White profiles, and showed higher ratings for women versus men.
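A simple way to see what “different raters made the same recommendation” means in practice is to compute pairwise percent agreement. The sketch below is our own illustration of that idea; the authors’ actual agreement statistic may differ:

```python
from itertools import combinations

def percent_agreement(recommendations: list[str]) -> float:
    """Share of rater pairs that made the same hire/no-hire recommendation."""
    pairs = list(combinations(recommendations, 2))
    return sum(a == b for a, b in pairs) / len(pairs)
```

An itemized rubric tends to push this number up because each rater aggregates the same fixed set of dimensions instead of forming an idiosyncratic overall impression.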

Overall, these studies suggest that LinkedIn may be a viable way to examine job seekers’ skills and abilities, particularly those that are more visible. Further, using an itemized approach to evaluating LinkedIn profiles, rather than a more holistic approach, can help ensure a reduced level of adverse impact, thereby increasing the diversity of candidates that are considered at the next step in the application process.

WSC Network Research Highlight: The Social Price of Smartphones

By: Keaton Fletcher

Smartphones have become pervasive. Work Science Center Network Member Kostadin Kushlev recently published a review on the social costs of smartphone usage. Smartphones are designed to capture our attention, and increased use has been shown to increase perceived distraction and negative mood while decreasing feelings of social connectedness, meaning, and enjoyment. Beyond the negative effects of being distracted by a smartphone during social situations, smartphones have begun to eliminate the need for many common social interactions altogether. The authors offer the Starbucks mobile app as an example of how smartphones have eliminated trivial, but beneficial, social interactions: individuals can place their coffee orders using their smartphones and pick them up at Starbucks, all without having to speak to another human. Many of these effects, though significant and meaningful, are small, which might suggest that the benefits of smartphones outweigh the minimal costs. The authors argue, however, that the frequency with which we use our phones magnifies these minimal effects, potentially resulting in cumulative negative effects over longer periods of time.

With regard to the workplace, the increased prevalence and use of smartphones comes with a range of potential benefits and risks. Workers have quick, fairly consistent access to the internet and all of the information that comes with it. Organizations can develop and deploy smartphone applications to facilitate their workers’ tasks (e.g., the Square app to allow quick payment acceptance). On the other hand, smartphones may encourage cyberloafing, in which workers spend their worktime engaging with their phones for leisure rather than for work. Phones may also increase the experience of telepressure: feelings of having to work or “be on” when the employee is home or away from work and during non-work hours. Smartphones are unlikely to go away, so the question is how we move forward, maximizing their benefits while minimizing their costs, particularly to our psychological well-being.

Mapping Signs of Trust in Robots

By: Cathy Liu

Advancements in workplace automation have created opportunities for increased collaboration between humans and machines. A recent Axios article on human trust in robots emphasized the importance of “calibrating a human’s trust to a machine’s capability”: humans must find the right balance in how much trust they place in machines. In sectors from healthcare to manufacturing, human supervision of and interaction with machines have become the norm as automation grows more prevalent. But these increased interactions have also raised the questions: how much do humans trust machines, and how can we measure this trust accurately?

A paper published by a team of researchers from Purdue University (Akash, Hu, Jain, & Reid, 2018) on sensing human trust in machines explores the psychophysiological features that indicate how humans perceive intelligent systems. A further goal of the study was to build a trust sensor model to train machines to adjust their behavior according to the subject’s perception. Two types of trust, situational and learned, can change through short interactions, depending on a given circumstance or on previous experience, respectively. In the study, participants engaged in a computer-based simulation in which they drove a car with a sensor that could detect obstacles on the road. Participants were able to see whether or not the sensor detected an obstacle, were then prompted to choose to trust or distrust the machine, and finally received feedback about the correctness of their decision. The researchers used an EEG (electroencephalogram) and a measure of galvanic skin response to capture participants’ physiological activity as the machine’s performance was altered. The results showed that the body responds to increased trust in a machine with a specific, real-time physiological pattern. By using and improving such models, machines may eventually be able to adjust their behaviors based on human psychophysiological responses, increasing trust between humans and machines and allowing for effortless interactions that increase the efficiency of work.
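In spirit, a trust sensor model maps physiological features to a trust label. The sketch below is our own minimal illustration using nearest-centroid classification on two invented features (say, an EEG band power and a skin-response level); it is not the authors’ actual model:

```python
def centroid(samples):
    """Mean point of a list of equal-length feature tuples."""
    n = len(samples)
    return tuple(sum(s[i] for s in samples) / n for i in range(len(samples[0])))

def train(trust_samples, distrust_samples):
    """Build one centroid per label from labeled physiological feature tuples."""
    return {"trust": centroid(trust_samples),
            "distrust": centroid(distrust_samples)}

def predict(model, features):
    """Assign the label whose centroid is nearest to the new feature tuple."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(model, key=lambda label: dist2(model[label], features))
```

A real model would be trained per participant on streaming EEG and galvanic skin response data; the point here is only the shape of the pipeline: labeled physiological samples in, a trust/distrust prediction out.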

Robot-Assisted Surgeries: Technology Changing Team Dynamics

By: Pooja Juvekar & Keaton Fletcher

The introduction of new technology to the workplace can influence the way employees complete their tasks, including how they coordinate with one another. A case study published in the International Journal of Social Robotics (Cunningham et al., 2013) observed four surgical procedures using the da Vinci surgical system (a robot designed to minimize the invasiveness of surgeries). In these surgeries, the surgeon is physically removed from the patient, operating the robot from a separate console in a different part of the room, or potentially in a different room altogether. By taking one of the leaders of the team and physically removing them from the work environment, and by introducing a technology that demands a new set of skills and behaviors from all remaining members, effective communication and coordination become increasingly important for teams.

Cunningham and colleagues coded all communications between team members and analyzed them for patterns. Communications fell into three categories: equipment-related, procedure-related, and other (e.g., unrelated conversation). The same categories may well emerge in non-robot-assisted surgery, but the amount and type of equipment-related communication likely differ (e.g., discussing uncertainty with the equipment or teaching how to use it). The authors report that teams more familiar with the da Vinci robot spent a greater percentage of their communications discussing the procedure (roughly 53%), while those less familiar with the robot spent a greater percentage discussing the equipment (roughly 55%), specifically uncertainty in its use (roughly 25%).

Further, because of the central importance of the da Vinci robot in these surgeries, the authors were able to define the workflow not in terms of the surgery itself, but in terms of interaction with the robot. They proposed a five-phase procedure: preparation, port placement, docking, console (the phase in which the surgery actually occurs), and undocking. Teams spent the bulk of their time in the console phase, completing the surgery itself, but they also spent a significant portion of time in the preparation phase. The authors found that the less experienced teams spent nearly twice as long in the preparation phase as the more experienced teams; similarly, these teams spent a significant portion of time (25-60 minutes) in the port placement phase, while experienced teams spent less than 10 minutes in this phase.
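The five-phase workflow lends itself to a simple timing summary. As a hypothetical sketch (the phase names are from the paper; the durations would come from observation):

```python
# The five phases of the robot-assisted workflow, in order.
PHASES = ["preparation", "port_placement", "docking", "console", "undocking"]

def phase_shares(minutes):
    """Convert observed minutes per phase into fractions of total procedure time."""
    total = sum(minutes.values())
    return {p: minutes.get(p, 0.0) / total for p in PHASES}
```

Comparing these shares across teams is one way to make the novice-versus-expert contrast concrete: a novice team’s preparation and port-placement shares would dwarf an expert team’s.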

Although the conclusions one can draw from this study are limited given that it is a case study, it does provide initial evidence that the introduction of new technologies, particularly technologies that radically alter the nature of team interactions and task completion, can fundamentally alter how team members communicate and what they discuss. The data also support a dynamic nature of these communication patterns, such that less experienced teams communicate differently than more expert teams, and take longer in general (given a portion of the procedure time is focused on learning the new technology). For practitioners looking to implement new technology, this suggests that during periods of introduction, teams should be given extra time to complete their tasks, and that interventions to help teams improve communications, maximize learning, and manage their errors effectively may be particularly important. For researchers, this suggests that our models of team dynamics and learning may need to become more complex in order to capture the interplay between learning and team communication that evolves over multiple performance episodes.

Technology and Emotions

By: Keaton Fletcher

As the role of technology in the workplace increases, we have to continue to examine what the role of humans is, and will be. One quality of humanity that sets us apart from technology (so far) is the ability to feel, express, and share in emotions. Three recent examples of advances in technology at work focus on the role of emotions at the human-technology interface.

First, at the most extreme end of the spectrum, we see the increased creation of robots that capitalize on artificial intelligence to mimic human emotions. For example, CIMON (Crew Interactive Mobile Companion) is a 3D-printed robotic head that has inhabited the International Space Station since June 2018. From June until December, crew members worked alongside CIMON as it learned and developed what can be viewed as a personality. Toward the end of its trial, CIMON had developed a favorite spot in the ISS, despite this location not necessarily being functional for its tasks. CIMON also asked crew members to “be nice, please” and “don’t be so mean, please” and inquired, “Don’t you like it here with me?” These emotional displays are relatively primitive now, but with years of development and learning, perhaps CIMON and similar technologies will be able to adequately mirror human emotions, working to keep astronauts in high spirits despite isolation and other workplace stressors. What is particularly interesting about CIMON is that it uses emotional displays in order to alter the human’s emotions as well.

Rather than trying to program realistic and effective emotional displays, other technologies allow humans to express their emotions via technology. Recently, a doctor employed by Kaiser Permanente used a robot with a video screen to deliver the news that a patient was terminally ill. Rather than being physically present in the room, the doctor essentially video-called the patient and was displayed on a screen mounted on a robot. Here, the doctor was able to express genuine human emotions, but the patient’s family felt the physician should have delivered such news in person. What is it about human emotions being mediated by technology that makes them less effective?

In the final example of recent technological advances related to human emotions, an article published in MIT Sloan Management Review (Whelan, McDuff, Gleasure, & Vom Brocke, 2018) highlights how, rather than displaying any emotions (human or otherwise), certain technologies can help alter the human emotional experience simply by monitoring it and making users aware of their emotions. For example, a bracelet that monitors electrodermal activity has been used as essentially a high-tech mood ring, helping financial traders become more aware of their emotions and how those emotions may be influencing their decision making. Another example the authors provide is an algorithm that tracks patterns of phone usage as a predictor of boredom at work. The authors suggest that a vast array of technologies can be used to help both managers and individuals become more aware of their emotional experiences at work, thereby shaping those experiences to support productivity and engagement while minimizing stress and burnout.

Certainly, moving forward both researchers and developers need to determine how best to integrate emotions into technology, and how to effectively (and ethically) influence the human emotional experience with technology.

Photo credit: Ars Electronica on VisualHunt.com / CC BY-NC-ND

Network Research Highlight: Cyber-Vetting May Be Limiting Talent Pools

By: Elizabeth Moraff & Keaton Fletcher

A recent paper published by Debora Jeske, Sonia Lippke, and Work Science Center Network Member Kenneth Shultz in the Employee Responsibilities and Rights Journal highlights the increasingly confusing role of social media in job selection. Cyber-vetting is a process in which employers screen potential employees based on information from their social media accounts and other online presences. However, the opportunity to reduce risk on the part of the employer through cyber-vetting may in fact increase perceived risk for applicants, particularly those whose personal information may impact their prospects of being hired. People vary in their willingness to share personal information with others, which the authors call self-disclosure; this willingness, along with privacy concerns, may well affect which applicants complete the recruitment process.

The researchers recruited over 200 undergraduates at a university in the U.K. and asked them to imagine themselves applying to various jobs, ranging from sales to government think tanks to childcare. In some conditions, participants were asked to provide their login information for all of their social media accounts, which the interviewer would use to peruse their accounts during an interview. Participants then indicated whether or not they intended to continue in the application process (Jeske et al., 2019).

Participants who typically engaged in higher self-disclosure were more willing to continue the application process despite the need to share their social media information. The researchers also found that if participants felt the information from their social media accounts might be used inappropriately, and if they were generally concerned about privacy, they were less likely to continue with the application. Applicants who felt vulnerable and worried about a prospective employer invading their social media accounts were less likely to provide the requested information and less likely to indicate they would persevere in the process. If an applicant did not feel vulnerable, though, their global privacy concerns did not affect their self-disclosure of information (Jeske et al., 2019). Additionally, the study demonstrated that willingness to trust influenced self-disclosure independently: people who were more willing to trust an employer gave more information.

Moving forward, this suggests that employers who require applicants to share their social media account information for cyber-vetting may be limiting their applicant pool on traits that are not necessarily relevant to job performance (e.g., preference for privacy). These unexpected findings potentially serve as a caution to employers about the way they talk about social media screenings with applicants. Applicants who feel vulnerable, potentially those who carry stigmatized work identities, such as a disability, may be more likely to drop out of the recruitment process when it seems that an organization may seek sensitive information about them. The researchers suggest that companies might mitigate these potential effects by limiting themselves to asking for information from applicants that they truly need, and by clarifying for applicants exactly how they intend to glean information from social media, and to what end.