Social Networking and Hiring Process


This topic contains 3 replies, has 3 voices, and was last updated by Adriel Hampton 8 years, 1 month ago.

  • Author
  • #112214

    Henry Brown

    Interesting concept

    NOT sure that HR departments have the resources to do this…

    There is some “chatter” that the suitability/clearance investigations might someday incorporate this kind of information…

    From Mike Elgan’s blog

    ‘Pre-crime’ Comes to the HR Dept.
    By Mike Elgan
    September 29, 2010

    In the Steven Spielberg movie Minority Report, police belonging to a special Pre-crime unit arrest people for crimes they would commit in the future. It’s science fiction, and it will probably never happen in our lifetimes.

    However, the pre-crime concept is coming very soon to the world of Human Resources (HR) and employee management.

    A Santa Barbara, Calif., startup called Social Intelligence data-mines the social networks to help companies decide if they really want to hire you.

    While background checks, which mainly look for a criminal record, and even credit checks have become more common, Social Intelligence is the first company that I’m aware of that systematically trolls social networks for evidence of bad character.

    Using automation software that slogs through Facebook, Twitter, Flickr, YouTube, LinkedIn, blogs, and “thousands of other sources,” the company develops a report on the “real you” — not the carefully crafted you in your resume. The service is called Social Intelligence Hiring. The company promises a 48-hour turn-around.

    Because it’s illegal to consider race, religion, age, sexual orientation and other factors, the company doesn’t include that information in its reports. Humans review the reports to eliminate false positives. And the company uses only publicly shared data — it doesn’t “friend” targets to get private posts, for example.

    The reports feature a visual snapshot of what kind of person you are, evaluating you in categories like “Poor Judgment,” “Gangs,” “Drugs and Drug Lingo” and “Demonstrating Potentially Violent Behavior.” The company mines for rich nuggets of raw sewage in the form of racy photos, unguarded commentary about drugs and alcohol and much more.
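    None of this is Social Intelligence’s actual code, which is proprietary, but the pipeline the article describes (automated keyword scanning of public posts, with hits queued for human review to weed out false positives) can be sketched in a few lines. The category keywords and the scan_posts helper below are invented purely for illustration:

```python
# Toy sketch of keyword-based flagging with a human-review queue.
# Category names echo the article; the keyword lists and scan_posts
# are assumptions for illustration, not the vendor's method.

CATEGORIES = {
    "Drugs and Drug Lingo": {"weed", "blunt", "molly"},
    "Demonstrating Potentially Violent Behavior": {"shoot", "stab"},
}

def scan_posts(posts):
    """Return (category, post) pairs for a human reviewer to confirm,
    since raw keyword hits are riddled with false positives."""
    review_queue = []
    for post in posts:
        words = {w.strip(".,!?").lower() for w in post.split()}
        for category, keywords in CATEGORIES.items():
            if words & keywords:
                review_queue.append((category, post))
    return review_queue

flags = scan_posts([
    "Great BBQ this weekend!",
    "Gonna shoot some hoops later",   # classic false positive
])
print(flags)
```

    The false positive in the example is exactly why the article stresses that humans review every report before it goes out.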

    The company also offers a separate Social Intelligence Monitoring service to watch the personal activity of existing employees on an ongoing basis. The service is advertised as a way to enforce company social media policies, but since the criteria are company-defined, it’s not clear where policy enforcement ends and monitoring of personal activity begins.

    The service provides real-time notification alerts, so presumably the moment your old college buddy tags an old photo of you naked, drunk and armed on Facebook, the boss gets a text message with a link.

    Two aspects of this are worth noting. First, company spokespeople emphasize liability. What happens if one of your employees freaks out, comes to work and starts threatening coworkers with a samurai sword? You’ll be held responsible because all of the signs of such behavior were clear for all to see on public Facebook pages. That’s why you should scan every prospective hire and run continued scans on every existing employee.

    In other words, they make the case that now that people use social networks, companies will be expected (by shareholders, etc.) to monitor those services and protect the company from lawsuits, damage to reputation, and other harm. And they’re probably right.

    Second, the company provides reporting that deemphasizes specific actions and emphasizes character. It’s less about “what did the employee do” and more about “what kind of person is this employee?”

    Because, again, the goal isn’t punishment for past behavior but protection of the company from future behavior.

    It’s all about the future.

    The Future of Predicting the Future
    Predicting future behavior, in fact, is something of a growth industry.

    A Cambridge, Mass., company called Recorded Future, which is funded by both Google and the CIA, claims to use its “temporal analytics engine” to predict future events and activities by companies and individual people.

    Like Social Intelligence, Recorded Future uses proprietary software to scan all kinds of public web sites, then uses some kind of magic pixie dust to find invisible logical linkages (as opposed to HTML hyperlinks) that lead to likely outcomes. Plug in your search criteria, and the results come in the form of surprisingly accurate future predictions.

    Recorded Future is only one of many new approaches to predictive analytics expected to emerge over the next year or two. The ability to crunch data to predict future outcomes will be used increasingly to estimate traffic jams, public unrest, and stock performance. But it will also be used to predict the behavior of employees.

    Google revealed last year, for example, that it is developing a search algorithm that can accurately predict which of its employees are most likely to quit. It’s based on a predictive analysis of things like employee reviews and salary histories. They simply turn the software loose on personnel records, then the system spits out a list of the people who are probably going to resign soon. (I’m imagining the results laser-etched on colored wooden balls.)

  • #112220

    Adriel Hampton

    I was first exposed to social networking as a work research tool (I’m a local government investigator). It’s a brave new world. Thanks for sharing this, Henry.

  • #112218


    I was actually going to write my next week’s HR Humans Represent blog on this subject after reading an article on it last Thursday!

    I’ll share the information here:

    Background Checking … Using Social Media
    by Todd Raphael
    Sep 28, 2010, 5:16 pm ET

    Employee referrals and social media have begun to blend together. Could background checks and social media be next?

    A new company called “Social Intelligence” says it’ll “track the worldwide network of social media, including Facebook, Twitter, Flickr, YouTube, LinkedIn, individual blogs, and thousands of other sources.”

    Social Intelligence will, within 24-48 hours, produce a report on a job candidate using both automation and humans, the latter there to make sure there aren’t “false positives.” It says it will weed out “protected class” information it finds, such as race and religion. The company is also offering a version to monitor what existing employees are up to.

    As far as the hiring version, a screenshot shows that the candidate profile screens for such things as: “Gangs,” “Drugs/drug lingo,” “demonstrating potentially violent behavior,” and “poor judgment” — something we could all agree can be found in ample supply on social media.

    I asked the company’s CEO, Max Drucker, whether this judgment thing is kind of subjective. “We err on the side of not flagging something,” he says, adding that “serious red-flag issues” are what they’re really looking for. He also notes that the firm has three people review information before the profile’s done. So, “Todd beat Sean in the 600-meter dash” shouldn’t show up as a Todd-beats-people flag. I hope.

    Nick Fishman, the co-founder of EmployeeScreenIQ, doesn’t envision his or other similar companies going down the social-media background-checking road. “Not only are they not now, but I don’t foresee getting into it in the future,” he says. “It’s a hornet’s nest.” Awaiting employers in that nest, he says, are FCRA regulations and EEO rules.

    But Drucker, from Social Intelligence, says that “what we do is protect the employee from discrimination, and protect the employer from allegations of discrimination.” He notes that “if the employer is freaked out by the risks” of background checks and skips them, then they may end up liable for being negligent in the hiring process.

    Robert Pickell, who’s the senior vice president of customer solutions at HireRight, says that he expects to see a lawsuit like that before long: a workplace violence or similar episode will happen, and someone will argue that the employer should have found information on social media indicating that the employee was dangerous.

    HireRight has been talking to customers about the social-media-background-checking convergence for three or four years. The company has yet to plunge into it, though, saying there just isn’t demand, and the pitfalls are too great.

  • #112216

    Henry Brown

    A slightly different “spin”

    From danah boyd’s blog

    Regulating the Use of Social Media Data

    If you were to walk into my office, I’d have a pretty decent sense of your gender, your age, your race, and other identity markers. My knowledge wouldn’t be perfect, but it would give me plenty of information that I could use to discriminate against you if I felt like it. The law doesn’t prohibit me from “collecting” this information in a job interview nor does it say that discrimination is acceptable if you “shared” this information with me. That’s good news given that faking what’s written on your body is bloody hard. What the law does is regulate how this information can be used by me, the theoretical employer. This doesn’t put an end to all discrimination – plenty of people are discriminated against based on what’s written on their bodies – but it does provide you with legal rights if you think you were discriminated against and it forces the employer to think twice about hiring practices.

    The Internet has made it possible for you to create digital bodies that reflect a whole lot more than your demographics. Your online profiles convey a lot about you, but that content is produced in a context. And, more often than not, that context has nothing to do with employment. This creates an interesting conundrum. Should employers have the right to discriminate against you because of your Facebook profile? One might argue that they should because such a profile reflects your “character” or your priorities or your public presence. Personally, I think that’s just code for discriminating against you because you’re not like me, the theoretical employer.

    Of course, it’s a tough call. Hiring is hard. We’re always looking for better ways to judge someone and goddess knows that an interview plus resume is rarely the best way to assess whether or not there’s a “good fit.” It’s far too tempting to jump on the Internet and try to figure out who someone is based on what we can dredge up online. This might be reasonable if only we were reasonable judges of people’s signaling or remotely good at assessing them in context. Cuz it’s a whole lot harder to assess someone’s professional sensibilities by their social activities if they come from a world different than our own.

    Given this, I was fascinated to learn that the German government is proposing legislation that would put restrictions on what Internet content employers could use when recruiting.

    A decade ago, all of our legal approaches to the Internet focused on what data online companies could collect. This makes sense if you think of the Internet as a broadcast medium. But then along came the mainstreamification of social media and user-generated content. People are sharing content left, right, and center as part of their daily sociable practices. They’re sharing as if the Internet is a social place, not a professional place. More accurately, they’re sharing in a setting where there’s no clear delineation of social and professional spheres. Since social media became popular, folks have continuously talked about how we need to teach people to not share what might cause them professional consternation. Those warnings haven’t worked. And for good reason. What’s professionally questionable to one may be perfectly appropriate to another. Or the social gain one sees might outweigh the professional risks. Or, more simply, people may just be naive.

    I’m sick of hearing about how the onus should be entirely on the person doing the sharing. There are darn good reasons why people share information, and just because you can dig it up doesn’t mean that it’s ethical to use it. So I’m delighted by the German move, if for no other reason than to highlight that we need to rethink our regulatory approaches. I strongly believe that we need to spend more time talking about how information is being used and less time talking about how stupid people are for sharing it in the first place.

You must be logged in to reply to this topic.