Insider Threat Security
This cluster focuses on risks from rogue or malicious employees accessing sensitive data at tech companies, debating security measures such as access controls, logging, auditing, and approvals, and the vulnerabilities to insider threats that remain.
Sample Comments
Who said they're not approved? People get approval to access all sorts of things that they aren't supposed to let out of the company, or may be obligated to take certain steps to ensure it isn't left to the good graces of employees to keep it from getting out. There's no squaring the circle here. You cannot lay obligations on corporations to secure their data, yet not permit them any tools that allow them to actually implement that security, and expect…
"everyone"? What are you talking about? Have you ever worked in any sector that has security policies? Even if you haven't, perhaps spend 2 minutes using a search engine? Here is a first-page result for you: https://www.tomshardware.com/news/samsung-fab-workers-leak-c...
That's quite scary. I wonder if something like this is possible at Google or Microsoft or Yahoo. Even if multiple people need to approve that kind of access, it must be possible to socially overcome those barriers (via influence, bribery, etc.) if the right actors can be identified. It would be preferable to have control over this from the user's side.
Technically, sure. But usually a company of this size/scale puts restrictions in place so that accessing privileged abilities like this is extremely difficult and requires authorizations / clearances / permissions, etc. Generally, employees at big public tech companies are not even allowed to LOOK at PII or the data of a specific named user; you run all tests on staging or fake-populated DBs only.
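The "staging-only" rule described in the comment above can be sketched as a simple environment guard. This is a hypothetical illustration; the table names, environment labels, and `run_query` helper are all assumptions, not any real company's tooling.

```python
# Hypothetical guard: queries against PII tables are refused outside staging,
# so tests can only ever touch fake-populated data. All names are illustrative.
PII_TABLES = {"users", "payment_methods"}

def run_query(env: str, table: str) -> str:
    """Run a query, but block PII tables in any non-staging environment."""
    if table in PII_TABLES and env != "staging":
        raise PermissionError(
            f"PII table '{table}' may only be queried in the staging environment"
        )
    return f"SELECT * FROM {table}  -- executed in {env}"

run_query("staging", "users")   # allowed: staging holds only fake data
# run_query("prod", "users")    # would raise PermissionError
```

In practice this kind of check would live in the data-access layer or a query proxy, not in application code, but the principle is the same: the default path simply cannot reach real user data.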
Sure, it's OK if you have a small team and you can trust everyone on that team. But if you are in a big company with thousands of employees, you don't want a 'rogue' employee (or contractor, à la Snowden) to start accessing and messing with your services without company approval. The fewer employees who know the company's access keys and passwords, the better.
Could be some rogue employee, could be a legitimate employee under the constraints or influence of an organization, etc. The NSA tried to do it with Linux quite a few times.
Different employees in the company have different permissions. If an employee with a lot of access commits a secret to a shared repository, then employees who shouldn't have that much access can take the secret and use it.
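The escalation path in the comment above can be shown with a minimal permission-level model. This is a sketch under assumed names (`LEVELS`, `can_read`); the point is only that committing a high-clearance secret into a low-clearance repository collapses the secret's effective protection to the repository's level.

```python
# Toy permission model (all names assumed, not a real system).
LEVELS = {"intern": 1, "engineer": 2, "admin": 3}

def can_read(role: str, required_level: int) -> bool:
    """A role can read anything at or below its own level."""
    return LEVELS[role] >= required_level

# An admin-level API key (level 3) is committed to a repo readable at level 1.
repo_read_level = 1
secret_required_level = 3

# The intern can read the repo, and therefore the secret,
# despite lacking the clearance the secret nominally requires.
leaked = can_read("intern", repo_read_level) and not can_read(
    "intern", secret_required_level
)
```

This is why secret scanners and pre-commit hooks exist: once a secret lands in a widely readable repository, the permission model around the secret itself no longer matters.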
Pretty simple: some departments have access, but everything gets logged and audited. If you cannot connect a request to a ticket, you'll be questioned. If you abuse your access, you'll be fired immediately. Other industries handle it that way (e.g., banks). I know enough people in banking to know there's no chance they would ever risk looking up my accounts.
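The audit pattern described above — every access must be connectable to a ticket, and unticketed accesses get questioned — can be sketched as a log review. The `AuditEntry` shape, ticket format, and `find_unticketed` helper here are all illustrative assumptions, not any particular company's schema.

```python
# Hypothetical audit-log check: flag any access that cannot be connected
# to a support/ops ticket. All names and formats are assumed for illustration.
from dataclasses import dataclass
from typing import Optional

@dataclass
class AuditEntry:
    user: str
    resource: str
    ticket: Optional[str]  # e.g. "SUP-1001"; None if no ticket was attached

def find_unticketed(log: list) -> list:
    """Return accesses that cannot be connected to a ticket."""
    return [entry for entry in log if not entry.ticket]

log = [
    AuditEntry("alice", "account:42", "SUP-1001"),
    AuditEntry("bob", "account:77", None),
]
flagged = find_unticketed(log)
# Under the policy described above, bob would be questioned about this access.
```

The deterrent is less the check itself than the certainty that the check happens: as the commenter notes, people in banking don't risk lookups because they know every lookup leaves a trail.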
Article is paywalled. What's the claim based on? A company preventing employees from putting sensitive data into an external, unmanaged service does not mean the service isn't accessible internally in a controlled manner.
When I did an internship at a national lab, a lot of the hard rules about security relied on the fact that you had gone through their hiring process and would follow the rules. There were different access levels, for sure, but only like 2 or 3. You might have "had access," but you shouldn't be anywhere you didn't have a good reason for being. Lyft should be checking on this, running audits and whatnot, but they should also be setting good policy and culture to not abuse…