What our team is doing to keep LinkedIn safe
When our 610 million members use LinkedIn to find jobs, make connections, and learn new skills, they expect and deserve a safe and trusted community where they can express themselves professionally. Being a target of abuse online is painful, and it has no place on LinkedIn.
In the past, I’ve written about how members can stay safe, including what we’re doing to remove fake profiles and how we remove nation-state activity from LinkedIn. We continue to make progress in these areas, and we have a number of new ways to identify offensive content through a combination of human review, technology, and reporting by our members. Here are some of the new features and investments we’ve made:
The ability to report something as promoting terrorism or extreme violence. We hope that you don’t encounter this content, but if you do, please report it immediately. You can find this option within the reporting experience by clicking on the three dots at the top right of any post, comment, or message.
New technology is helping us detect fake profiles. In the first quarter of 2019, we identified thousands of fake profiles, and we continue to strengthen our detection systems and remove these profiles on behalf of our members.
We’re committed to continuing to take action on content and profiles that violate our Terms of Service and Professional Community Policies. Keeping this global community safe, secure, and thriving is what gets us up in the morning, but we can’t do it alone. With our teams, our technology, and our members reporting every instance of abuse they see, we can keep LinkedIn the safe, trusted, and professional place it needs to be.