Kieran Snyder on CNN


Textio's website, textio.com, now just about a year old, is worth a look.

Kieran's LLOG guest posts:

"Men interrupt more than women", 7/14/2015
"Want to get ahead as a woman in tech?  Learn to interrupt.", 7/17/2014

And a test of those ideas on a dataset of meeting transcripts:

"More on speech overlaps in meetings", 7/16/2014

For those of you near Philadelphia, Kieran will be giving a talk on December 3 in the Penn CIS department's fall lecture series.

Abstract:
How do you measure unconscious bias?

This talk will feature Kieran Snyder of Textio, who will walk through a range of quantitative and statistical research on gender, language, and technology, including:

How often men and women interrupt each other in technology workplace settings

Gendered language in performance reviews and job listings

Systematic differences in how men and women with similar backgrounds choose to present themselves in resumes

We'll also look at the powerful and increasing role that software plays in addressing unconscious bias in the workplace, with a specific focus on machine learning solutions that show measurable results. We'll take a particular look at how Textio is using machine learning and natural language processing to address unconscious bias in job listings, and briefly look at software that covers other aspects of employee development and gender in technology.

Bio:
Kieran Snyder is the Co-Founder and CEO of Textio, a machine learning company that provides text analytics on job listings, resumes, performance reviews, and other documents about people. She holds a PhD in linguistics from the University of Pennsylvania and has previously held product and engineering leadership roles at Microsoft and Amazon. Her work on language, technology, and document bias has appeared in Fortune, Re/code, The Washington Post, Slate, VentureBeat, and CNN.



2 Comments

  1. Met Feddis said,

    November 8, 2015 @ 7:15 am

    Interesting software, although it seems awfully simplistic on the face of it to say, "These phrases will put women off, and these will put men off." I hope there is sufficient data behind these categorizations.

    [(myl) My understanding is that the analysis is based on correlational studies of very large datasets covering job postings and the responses to them, from sources like monster.com. I'm not sure how strongly they can eliminate the hypothesis that the arrow of causation points in a somewhat different direction — perhaps people whose companies attract more female applicants, for other reasons, tend to write job listings differently from people whose companies attract fewer females. But this type of inferential problem exists in (say) most studies of the health effects of diet and exercise.]
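    To make the inferential worry concrete, here is a minimal sketch of the kind of correlational analysis described above. Everything in it is hypothetical: the toy data, column names, and model choice stand in for whatever the real studies do, and the comment at the end restates the causal-direction caveat rather than resolving it.

    ```python
    # Illustrative sketch only: a correlational model of job-listing
    # language vs. applicant gender mix. All data and column names are
    # hypothetical; this is not Textio's method, just the generic
    # approach described in the note above.
    import pandas as pd
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LinearRegression

    # Hypothetical dataset: one row per job listing, with the observed
    # share of female applicants it attracted.
    listings = pd.DataFrame({
        "text": [
            "We need a rockstar ninja to crush aggressive deadlines",
            "Join a collaborative team committed to mentorship and growth",
            # ... many thousands more rows in a real study
        ],
        "female_applicant_share": [0.18, 0.47],
        "company": ["AcmeCo", "BetaCorp"],
    })

    # Phrase features: which words and bigrams co-vary with applicant mix?
    vectorizer = TfidfVectorizer(ngram_range=(1, 2), min_df=1)
    X = vectorizer.fit_transform(listings["text"])
    model = LinearRegression().fit(X, listings["female_applicant_share"])

    # The fitted coefficients show which phrases *correlate* with a more
    # or less gender-balanced applicant pool. They do not, by themselves,
    # settle the causal direction: companies that attract more female
    # applicants for other reasons may also happen to write differently.
    # Comparing listings within the same company (e.g., via company
    # fixed effects) is one standard way to tighten the inference.
    ```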

    I also hope that this kind of software will be used with the thought in mind that female dominance in fields like teaching, nursing, administrative assistant jobs, childcare, etc, is just as much a situation that society is choosing to allow to exist, and may have just as many negative social consequences, as the exclusion of women from boardrooms and tech jobs. The ideal should be to create a less gender-polarized society overall, not simply to masculinize women by encouraging them to adopt traditionally masculine work roles. Teaching is no more a woman's job than engineering is a man's job, and we have to address both ends of that imbalance to level the playing field.

    Also, not being a professional linguist and correspondingly conversant with the literature, I dearly hope that someone is working on linguistic racial and ethnic bias in job ads and performance reviews, etc. There is so much work to be done in creating a sense of responsibility in the corporate world to actively seek to make workplaces look like the communities in which they are embedded, not simply to fall back on the excuse that there aren't enough qualified candidates of a given race or ethnicity. Surely, crafting job listings that appeal to all sorts of people could be part of that effort.

  2. Kieran Snyder said,

    November 10, 2015 @ 6:58 am

    Thank you! I'm really looking forward to being back at Penn in a few weeks.

    Met, these are good questions. Our tagged data set (of job listings + information about how they performed in the real world) is very large and growing every week, and when we evaluate the bias tone of a listing, we are doing so against that large data set (by looking at how similar listings have performed in the real world). While it's true that some fields or industries are demographically skewed to begin with, the data set is big enough to deal with this; not all nursing jobs (or engineering jobs, for that matter) attract the same proportions of men and women, even in the context of industries that are heavily skewed.
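    For readers curious what "evaluating a listing against how similar listings performed" might look like mechanically, here is a minimal sketch. It is not Textio's actual system; the tiny corpus, the outcome numbers, the nearest-neighbor approach, and the `predicted_outcome` helper are all hypothetical stand-ins for the idea just described.

    ```python
    # Minimal sketch: score a new listing by averaging the observed
    # outcomes of the most similar listings in a tagged corpus.
    # Hypothetical data and method; not Textio's actual pipeline.
    import numpy as np
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.neighbors import NearestNeighbors

    corpus = [
        "Seeking a dominant self-starter to work under pressure",
        "Looking for a supportive team player with strong communication",
        "Hiring an engineer to own our data pipeline end to end",
    ]
    # Observed outcome for each listing, e.g. share of female applicants.
    outcomes = np.array([0.21, 0.52, 0.38])

    vectorizer = TfidfVectorizer()
    X = vectorizer.fit_transform(corpus)

    # Index the tagged corpus so we can find the listings most similar
    # (by cosine distance over tf-idf features) to a new one.
    nn = NearestNeighbors(n_neighbors=2, metric="cosine").fit(X)

    def predicted_outcome(new_listing: str) -> float:
        """Predict the new listing's outcome as the average outcome of
        its nearest neighbors in the tagged corpus."""
        vec = vectorizer.transform([new_listing])
        _, idx = nn.kneighbors(vec)
        return float(outcomes[idx[0]].mean())

    print(predicted_outcome("We want an aggressive self-starter"))
    ```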

    We look at more than bias – we also model dimensions like time to fill a role or popularity/quality of applicants. But bias is an important dimension that contributes to those other metrics too.
