Healthcare Goes High-Tech

By: Catherine Liu

Modern healthcare organizations are adapting and innovating in response to the boom in artificial intelligence. A recent paper identifies two distinct branches of artificial intelligence use in healthcare: virtual and physical.

The virtual branch encompasses the use of deep learning for information management, electronic health records, and physician decision support. It focuses on technology that assists healthcare workers by processing and organizing information, so that less time is spent on menial tasks a computer could complete. For example, electronic medical records make patient information easily accessible to doctors and nurses and allow important information to be collectively organized in one location. The virtual branch also includes the many applications of machine learning techniques to the imaging technology used by radiologists.

In contrast to the virtual branch, the physical branch focuses on tangible technologies that capitalize upon artificial intelligence in order to complete a set of tasks. This can include nanorobots that assist with drug delivery and robots used to care for elderly patients. For example, human-interactive robots can assist, guide, and provide psychological enrichment for older patients (Shibata et al., 2010).

Although artificial intelligence holds great promise, its use in healthcare raises a myriad of societal and ethical complexities, given concerns over reliability, safety, and accountability. As detailed by the Nuffield Council on Bioethics, artificial intelligence currently has many limitations in the medical field. For example, artificial intelligence relies on large amounts of data in order to learn how to behave, but the current availability and quality of medical data may not be sufficient for this purpose.

Artificial intelligence may also propagate inequalities in healthcare if trained on biased data. For example, a recent study found that men and women receive different treatment after heart attacks. If the training data did not account for this difference and included primarily male patients, the treatment suggestions given by an artificial intelligence program would be biased toward male patients and could thereby harm female patients.

On a practical note, artificial intelligence is limited by computing power, so the large, complex datasets inherent to healthcare may present a challenge, particularly for organizations that lack the financial resources to purchase and maintain computers capable of these calculations. Lastly, artificially intelligent systems may lack the empathy, or the ability to process a complex situation, needed to suggest the right course of further treatment, as in the case of palliative care.
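To make the bias concern concrete, consider a minimal simulation: a hypothetical sketch in Python with scikit-learn, not drawn from any study cited above. A model trained mostly on male patients learns a decision rule that systematically misses female patients, even though it never sees sex as an input.

```python
# Hypothetical sketch: skewed training data produces group-specific errors.
# The data are simulated; no real clinical data or cited study is used.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def simulate(n_male, n_female):
    # One feature (a symptom score); label = 1 if "treatment A" helps.
    sex = np.array([0] * n_male + [1] * n_female)   # 0 = male, 1 = female
    score = rng.normal(size=sex.size)
    # Assumed ground truth: the helpful threshold differs by sex.
    label = (score > np.where(sex == 0, 0.0, 1.0)).astype(int)
    return score.reshape(-1, 1), label, sex          # model never sees sex

# Train on a sample that is 95% male ...
X_tr, y_tr, _ = simulate(n_male=950, n_female=50)
model = LogisticRegression().fit(X_tr, y_tr)

# ... then evaluate on a balanced population.
X_te, y_te, sex_te = simulate(n_male=1000, n_female=1000)
pred = model.predict(X_te)
for code, name in [(0, "male"), (1, "female")]:
    err = (pred[sex_te == code] != y_te[sex_te == code]).mean()
    print(f"Error rate for {name} patients: {err:.2f}")
```

In this toy setup, the error rate for female patients comes out several times higher than for male patients; real clinical imbalances are subtler, but the mechanism is the same.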

Rather than relying on artificial intelligence alone or abandoning it entirely, combining the predictions of machine learning algorithms with the expertise and empathy of healthcare providers may allow for better, more comprehensive treatment overall as we head into the future of modern healthcare.

Millennial cyberloafing: Why it’s costly & how to approach the problem

By: Jacqueline Jung

With access to technology and the internet nearly ubiquitous in the modern workforce, organizations are struggling with a relatively new phenomenon: cyberloafing. Cyberloafing is the use of technology at work for non-work-related purposes (e.g., checking social media, watching YouTube videos). Cyberloafing may reduce productivity and has been estimated to cost U.S. organizations $85 billion annually (Zakrzewski, 2016). On the other hand, employees born between 1981 and 1995 (i.e., millennials) grew up with the internet and constant access to technology, and may, to some extent, expect this continued liberty at work. The question then remains: how can organizations mitigate the negative effects of cyberloafing while still attracting and retaining millennials, who will soon make up the majority of the U.S. workforce?

For millennials, technology may be viewed as inseparable from communication and entertainment; texting is the standard mode of communication, and sporting events, music, and games can all be accessed through a smartphone (Pew Research Center, 2009). Millennials also prefer to use the internet to learn new information, more so than their colleagues from previous generations, who prefer traditional, structured training (Proserpio & Gioia, 2007). Nor do millennials hold the same work values as other generations; they view work as less important to their identity and place a stronger priority on leisure and work-life balance (Twenge, Campbell, Hoffman, & Lance, 2010). Taken together, this suggests that addressing cyberloafing may be particularly challenging when considering millennial employees.

Two opposing organizational approaches toward cyberloafing are deterrence and laissez-faire. Deterrence policies limit technology use through stringent monitoring and surveillance, while laissez-faire policies call for little to no interference or surveillance from the company. Sixty-six percent of firms claim to monitor internet use at work (American Management Association, 2008), and while regulation may increase productivity, too much can be counterproductive (e.g., Henle, Kohut, & Booth, 2009). Deterrence strategies, such as stringent technology use policies, may erode millennials' trust in the organization, because surveillance is viewed as an indication of distrust and millennials view technology as a right that should not be blocked (Coker, 2013). Strict monitoring may also be seen as an encroachment upon millennials' desire for work-life balance. Therefore, a zero-tolerance policy for personal technology use may make it difficult to attract millennials to an organization and may increase turnover intentions among millennials already within it (e.g., Henle et al., 2009).

A laissez-faire approach, on the other hand, leaves employees susceptible to the myriad negative outcomes of technological distraction. Henle and colleagues (2009) suggest that technology may reduce individuals' attention toward their tasks, and cyberloafing may reduce the amount of time individuals have to complete their tasks, thereby increasing employee stress. Ultimately, employees' unrestricted access to personal technology may lead to a decline in organizational performance (Raisch, 2009).

There are viable solutions, however. For example, organizations can establish a clear technology use policy and train millennials as well as their managers on both the benefits and drawbacks of personal technology use at work. When seeking to create this policy, organizations should form an internal committee that includes employees in order to reach an agreed-upon and mutually beneficial stance. This may reduce the likelihood that employees will react negatively to the final policy, since they were a part of its creation (Corgnet, Hernan-Gonzalez & McCarter, 2015). Finally, organizations must provide relevant training on policies and best practices to both employees and managers to ensure standardization and compliance.

References

Coker, B. (2013). Workplace internet leisure browsing. Human Performance, 26(2), 114-125.

Corgnet, B., Hernan-Gonzalez, R., & McCarter, M. W. (2015). The role of decision-making regime on cooperation in a workgroup social dilemma: An examination of cyberloafing. Games, 6, 588-603.

Pew Research Center. (2009, January 28). Generations online in 2009. http://www.pewinternet.org/2009/01/28/generations-online-in-2009/

Kim, S. (2018). Managing millennials’ personal use of technology at work. Business Horizons, 61(2), 261-270.

Proserpio, L., & Gioia, D. (2007). Teaching the virtual generation. Academy of Management Learning & Education, 6(1), 69-80.

Raisch, S. (2009). Organizational ambidexterity: Balancing exploitation and exploration for sustained performance. Organization Science, 20(4), 685-695.

Twenge, J., Campbell, S., Hoffman, B., & Lance, C. (2010). Generational differences in work values: Leisure and extrinsic values increasing, social and intrinsic values decreasing. Journal of Management, 36(5), 1117-1142.

Zakrzewski, J. L. (2016). Using iPads to your advantage. Mathematics Teaching in the Middle School, 21(8), 480-483.

Bruce Walker Interviewed for Inaugural ScienceMatters Podcast

Date: Wednesday, August 22, 2018

Bruce Walker, Professor of Psychology at Georgia Tech and friend of the Work Science Center, was recently interviewed for the inaugural ScienceMatters podcast at Georgia Tech. During the interview, he discusses data sonification and ways of making data and results easily accessible to the public.

What is the Ideal Robot Teammate’s Personality?

By: Keaton Fletcher

What kind of robot would you want for a teammate? A recent theoretical paper argued that robot personality will influence individuals' and teams' motivation. To better understand robot personality, we must first briefly describe personality traits in humans. The most widely accepted model of human personality captures an individual's general tendencies and preferences within five primary domains: extraversion (i.e., outgoingness and social dominance), neuroticism (i.e., emotional volatility and anxiety), agreeableness (i.e., politeness and a preference for social harmony), conscientiousness (i.e., orderliness, attention to detail, and rule-following), and openness to experience (i.e., willingness to experience novel and ambiguous situations or stimuli). Generally speaking, we value high levels of extraversion, agreeableness, conscientiousness, and openness to experience, and low levels of neuroticism.

As robots become more advanced, humanoid, and ubiquitous, an understanding of robot personality in teams should help roboticists design and program the ideal robot teammate. Robert Jr. argued that, just like a human, a robot that appears to be high in all of the Big Five personality traits except neuroticism would help keep individuals and teams motivated. A study from 2006 found that simply being humanoid (as opposed to more mechanical) in shape led to robots being perceived as higher in the Big Five traits (and lower in neuroticism). If Robert Jr.'s theory is supported, this would suggest that teams incorporating robots are better served by a humanoid-shaped robot than by other designs, because it may help teams and individuals set more challenging goals, work harder and longer to achieve those goals, and hold a stronger belief that they can achieve them, which should ultimately improve performance and satisfaction. That said, as artificial intelligence capabilities increase, there may be ways to program the apparent personalities of robots to be tailored to the situation, like what is seen in the 2014 movie Interstellar or, less effectively, in the book series The Hitchhiker's Guide to the Galaxy.
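As a toy illustration (not from Robert Jr.'s paper), the profile he describes can be encoded directly; the trait scale and cutoff values below are invented for the example.

```python
# Toy encoding of a Big Five profile for a robot teammate; scores assumed 0-1.
# Cutoffs are invented for illustration, not taken from Robert Jr.'s paper.
from dataclasses import dataclass

@dataclass
class BigFiveProfile:
    extraversion: float
    neuroticism: float
    agreeableness: float
    conscientiousness: float
    openness: float

def is_ideal_teammate(p: BigFiveProfile, high=0.7, low=0.3) -> bool:
    """High on four traits, low on neuroticism, per the argument above."""
    return (p.extraversion >= high and p.agreeableness >= high and
            p.conscientiousness >= high and p.openness >= high and
            p.neuroticism <= low)

print(is_ideal_teammate(BigFiveProfile(0.8, 0.1, 0.9, 0.85, 0.75)))  # True
```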

Automating Fashion

By: Xinyu Chu

Although automation and robotics have long impacted manufacturing jobs, with recent technological advances, even more traditional office jobs are feeling the change. A New York Times article by Noam Scheiber discusses the role automation is playing in the fashion industry. For example, the tasks of a fashion buyer, which typically require intuition about changes in the tastes and preferences of customers in order to predict future fashion trends, are beginning to be supplemented, if not replaced, by artificial intelligence. Machine learning has enabled artificial intelligence algorithms to extract profile information about customers, ranging from the items they put in their wishlists to their search histories or occupations, and to make better predictions about which items to stock and recommend. Traditionally, fashion buyers have worked in large groups, with each buyer focusing on a specific style of clothing and monitoring possible changes in trends and customer preferences. With the aid of artificial intelligence, a small group of buyers, or even a sole individual, can handle the job.
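What might such a prediction look like in practice? Here is a deliberately simple sketch (all item names, tags, and weights are invented, and real systems use far richer models): catalog items are scored by how well their style tags overlap with signals from a customer's wishlist and searches.

```python
# Hypothetical sketch of profile-based recommendation; data and weights
# are invented for illustration.
from collections import Counter

catalog = {
    "linen blazer":    {"minimalist", "neutral", "workwear"},
    "floral sundress": {"floral", "casual", "summer"},
    "graphic tee":     {"casual", "streetwear"},
}

def recommend(wishlist_tags, search_terms, top_n=2):
    # Weight the signals: wishlist items count more than searches.
    signal = Counter()
    for tag in wishlist_tags:
        signal[tag] += 2
    for term in search_terms:
        signal[term] += 1
    # Score each catalog item by its summed overlap with the signal.
    scores = {item: sum(signal[t] for t in tags) for item, tags in catalog.items()}
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

print(recommend(wishlist_tags={"casual", "summer"}, search_terms={"floral"}))
```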

Yet, use of artificial intelligence is not without its limitations. For many personalized fashion companies (e.g., StitchFix), although their algorithms can make better predictions of general trends and of each customer's preferences, they still require a human touch to collect the data and interact with customers. Many customers do not know exactly what they want, and, at least for now, it takes the expertise of a human consultant to help determine what sorts of input are most relevant for the algorithms. Left unchecked, artificial intelligence can create problems for organizations. For example, the t-shirt company Solid Gold Bomb used an unchecked algorithm to create thousands of unique t-shirt designs based on the slogan “Keep Calm and Carry On,” replacing “carry on” with various phrases. Within these thousands of designs, a subset carried offensive phrases that no one from the company saw before the options were uploaded for purchase. Eventually, the company went bankrupt. A little more of a human touch in the process may have prevented these issues and saved the company.
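A minimal sketch of that failure mode, and of the cheap safeguard that was missing, might look like this (the word lists and blocklist here are invented; the real incident involved a much larger, unreviewed dictionary):

```python
# Hypothetical sketch: combinatorial slogan generation with a review step.
import itertools

TEMPLATE = "Keep Calm and {verb} {noun}"
verbs = ["Carry", "Hug", "Hit"]   # an unvetted word list can hide "Hit"
nouns = ["On", "Them", "a Book"]

BLOCKLIST = {"hit"}  # words a human reviewer has flagged as unacceptable

def generate_designs():
    for verb, noun in itertools.product(verbs, nouns):
        slogan = TEMPLATE.format(verb=verb, noun=noun)
        # The missing step at Solid Gold Bomb: check before publishing.
        if any(word in BLOCKLIST for word in slogan.lower().split()):
            continue  # route to human review instead of auto-publishing
        yield slogan

for design in generate_designs():
    print(design)
```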

Rather than completely eliminating employees from the workplace, the introduction of automation may simply change the way in which people work and the types of tasks they need to focus on.

Did You Google It? Enterprise Social Media Enhances Autonomous Learning

By: Keaton Fletcher

It has become expected by both employers and employees that jobs will require continued learning over the course of one's career; enterprise social media is one method companies can use to facilitate that learning. According to a conference paper published by Carine Touré, Christine Michel, and Jean-Charles Marty, enterprise social media consists essentially of corporately sponsored online forums. These forums capitalize upon the way most of us already search for information outside of work (hint: we Google it).

Although there are many methods corporations can use to facilitate learning (e.g., formalized training programs, informal mentoring programs), evidence suggests that roughly 75% of learning at work occurs informally. Understanding this, organizations have tried different methods to ensure that employees have access to correct information when they need it. This effort started with knowledge management systems that functioned as repositories for knowledge. In theory, employees could search these repositories for the information they needed, when they needed it. In practice, however, these static knowledge repositories are often difficult to use or search, and are left unused, collecting digital (or literal) dust.

To address the weaknesses of these static knowledge management systems, organizations turned toward communities of practice. These communities introduced a social aspect to learning, allowing workers to share their experiences and learn from one another. In theory, communities of practice capitalize upon the social nature of humans, allowing members to identify with their community, which motivates them to participate. Touré and colleagues, however, suggest that these fall short of aspirations as well. Often, there are questions about the validity of information provided through communities of practice: who determines who is eligible for the community, and how do they make this decision?

More recent attempts to address these issues rely on enterprise social media to combine the benefits of traditional knowledge management systems and communities of practice. Online forums that connect workers within an organization, or across a discipline, allow for the creation of a large and easily searchable knowledge repository. They also allow for social interaction through commenting and “liking” or “upvoting” answers or posts. This combination creates a dynamic and engaging way to provide opportunities for informal learning on the job.
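As a rough sketch of that combination (all names and data are invented, not taken from Touré and colleagues' system), an enterprise social media post only needs a searchable body plus the social signals layered on top:

```python
# Hypothetical sketch: a searchable knowledge post with social signals.
from dataclasses import dataclass, field

@dataclass
class Post:
    title: str
    body: str
    comments: list = field(default_factory=list)  # could be moderated
    upvotes: int = 0

posts = [
    Post("Flushing the intake filter", "Close valve A, then run the backwash cycle..."),
    Post("Reading the chlorine sensor", "Values above 2.0 mg/L usually mean..."),
]

def search(query: str):
    """Naive keyword search; rank matches by community upvotes."""
    hits = [p for p in posts if query.lower() in (p.title + p.body).lower()]
    return sorted(hits, key=lambda p: p.upvotes, reverse=True)

posts[0].upvotes += 3  # a colleague found the backwash answer useful
for post in search("filter"):
    print(post.title, post.upvotes)
```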

To test the design of an enterprise social media intervention, Touré and colleagues worked with a water treatment and distribution company in the south of France. The company originally had a traditional knowledge management system that acted as a catalog of information digitally accessible to employees; employees reported not using it to learn or to solve their problems. The researchers worked to implement an enterprise social media system for the company, interviewing employees about how best to do so. Employees wanted comments to be moderated to prevent overuse or abuse, and they wanted posts to be labeled with their level of information coverage and their level of utility. The researchers found that users thought the new system was easier to use, but that usage did not actually change, because people still felt that they were already experts.

So, what have we learned? Combining digital repositories of knowledge with social media platforms offers companies a way to enhance informal learning. The next steps for companies may be helping employees recognize their own limitations, and the utility of seeking information from these systems.

Reading the IT Leaves: NSF’s ITEST Program & the Future of Work

By: Keaton Fletcher

Technology is clearly changing the entire workforce, but how can workers change to keep up? To help with this massive transition, the National Science Foundation sponsors ITEST (Innovative Technology Experiences for Students and Teachers). ITEST works to connect students from prekindergarten through 12th grade with professionals in STEM (science, technology, engineering, and math) careers to help students gain the skills and knowledge necessary for a successful career in the modern workforce.

To put the ITEST program into context, Malyn-Smith and colleagues (2017) published a paper reviewing the current workforce trends given the changes in technology. They argue that disruptive technologies and innovations are changing the status quo. Work will look fundamentally different in this new augmented age, when humans are enhanced by working with machines. To be successful, the modern worker, regardless of career path, industry sector, or job, must become knowledgeable about technology.

Malyn-Smith and colleagues point toward multiple technology-driven changes in the modern workforce to which workers will need to adapt. More and more frequently, workers will have to work in interdisciplinary teams of humans, and these teams will include machine-based teammates as well. Given the technology-based increase in human capacity, workers will now be able to address problems that were beyond the realm of possibility ten years ago. However, these problems cannot be solved by solitary workers, or even by teams of workers who all share the same knowledge, skills, and abilities. Malyn-Smith and colleagues argue that the types of projects and problems that will become common in the modern workforce will require teams of individuals with unique skillsets and knowledge bases, all of whom will have to interact with machines in order to complete their tasks.

Workers will also have to become comfortable working with massive amounts of data. As machine learning, artificial intelligence, and automation become more common, the role and accessibility of data, especially big datasets, have grown enormously. Modern workers will need, at the very least, to understand the role of data in problem-solving, how to present data, and how to protect it.

Many organizations, Malyn-Smith and colleagues suggest, are already relying on informal learning in the workplace, and worker-initiated formal learning (e.g., certificate programs or higher education) during personal time to help address these changing demands. As the nature of work continues to change, the modern worker must be driven by personal interest to learn new skills. Malyn-Smith and colleagues argue that organizations may begin to care less about formal degrees or certifications, instead giving more value to evidence of specific skill sets that can be applicable across a variety of contexts.

Enter, ITEST. By exposing children to STEM careers from a young age, the NSF-sponsored initiative seeks not only to normalize STEM careers, but also to help the next generation of workers begin thinking in ways that will be necessary in the modern workforce (e.g., data-driven). Since 2003, ITEST has sponsored over 300 projects, working with over 560,000 students and roughly 17,000 educators in the United States. Students learn scientific and technology-based content while being exposed to careers in STEM fields. Through this program, teachers are also given an opportunity to develop their own skill sets to better prepare their students for the modern workforce. A multitude of scientific papers (e.g., Blustein et al., 2012; Christensen et al., 2014) have been published examining the efficacy of ITEST, all of which point toward its success.

Arguably the main takeaway of the NSF ITEST program and Malyn-Smith and colleagues’ paper is to promote early STEM learning, particularly in the face of the current changes in the workplace, so as to best prepare the next generation of workers for what lies ahead.

After Automation: Will There Be Enough Jobs?

By: Keaton Fletcher

Will your job be replaced by a robot? A report by McKinsey Global Institute suggests probably not.

Most people in the workforce today have, like those before them, wondered whether they will be replaced by new technology. The concern is so great that Time magazine released an article and an associated widget that tells you the odds of your job becoming automated. In a recent podcast, Peter Gumbel interviewed James Manyika, a Senior Partner at McKinsey & Company, about this very subject.

James Manyika suggests that, in the past, new technology often simply provided workers with additional support or removed the need to complete mundane, routinized tasks, rather than replacing workers entirely. He suggests, however, that this may not necessarily be the case with the modern rise of machine learning and other methods of capitalizing on technology at work. Data suggest that computing power may have increased roughly a trillion-fold since 1956. The ability of modern computers, networks of computers, and cloud computing to rapidly complete complex calculations has added to the growing concern, and excitement, about what the next few years have in store for the modern workforce. Manyika suggests it is not just the computing power, but also the accessibility of data. More than ever before, humans around the world are uploading unique data at unfathomable rates. For example, an estimate in 2012 suggested that people upload about 250 million photos to Facebook per day; a similar report in 2014 estimated that people upload about 1.8 billion photos per day across all social media platforms. The algorithms behind machine learning and artificial intelligence can use this surge in data points to become more accurate.

Manyika argues that this unprecedented power and access to data will allow algorithms and AI to boost economic performance for companies and countries. But what about workers? Based on their research, Manyika suggests that if your job consists primarily of data collection or data processing, or of manual labor in a predictable and stable environment, it will likely be automated. These three task types, the ones most prone to automation, make up about 51% of economic activity. The good news, however, is that a mere 5% of occupations consist primarily of these tasks. A majority of jobs (60%) are a mixture of roughly one-third these tasks and two-thirds less easily automated tasks.
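Some back-of-the-envelope arithmetic shows how those figures fit together (illustrative only; the report's actual job and task categories are more detailed than these two buckets):

```python
# Back-of-the-envelope arithmetic with the figures quoted above; the two
# buckets are a simplification of the report's categories.
primarily_automatable = 0.05 * 1.0   # ~5% of occupations, nearly all tasks
mixed = 0.60 * (1 / 3)               # ~60% of jobs, roughly 1/3 such tasks
lower_bound = primarily_automatable + mixed
print(f"Automatable activity from these two groups alone: {lower_bound:.0%}")
# Prints ~25%; the report's broader task-level estimate (~51%) also counts
# automatable tasks scattered across the remaining jobs.
```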

What does this mean for workers? Manyika suggests that most people will experience some major change in their jobs, but probably will not have to change occupations entirely. Future-oriented CEOs and organizations have already begun training and developing their employees to better adjust to the coming shifts in workload and demands. Manyika also suggests that occupations in the care industry, and other jobs that require empathy and judgment, will become more common and will be more resistant to automation. By 2030, Manyika predicts, about 16% of jobs globally will have been automated to some extent (the number rises in more industrialized countries), meaning that big change is coming for a large portion of the population.

So, are robots going to take your job? Probably not, but they will almost certainly change it.