As the number of smartphones increases, as a few platforms begin to dominate and as users download ever more executable code, mobile devices will become targets for attack. Rather than repeat the mistakes of the PC world, why can’t we do things better from a security perspective this time around?
So far, most mobile platforms are off to a good start. Requiring all third-party code to run in a sandboxed environment (and making ‘jailbreaks’ difficult) is a great first step. This is a lot like running users as ‘standard users’ in Windows, which I’ve recommended multiple times. However, a smart and financially motivated malware writer will simply target the user’s data rather than trying to break out and corrupt the main OS. Just as in today’s attacks on enterprise PCs, why should a malware writer go for a noisy attack on the mobile OS when he can quietly harvest user-accessible sensitive data or quietly activate user-accessible features? For example, user-accessible data includes address books, contact lists, email and so on, and user-accessible features include turning the microphone and camera on and off.
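To make the point concrete, here is a minimal sketch (a toy model, not any real mobile OS API; the class and permission names are hypothetical) of why a correctly enforced sandbox doesn’t stop this attack: a malicious app that has been granted a legitimate permission can harvest user data without ever breaking out.

```python
# Toy model of a permission-mediated sandbox. The sandbox enforces its
# policy perfectly -- and the malicious app still gets the address book,
# because the user granted it READ_CONTACTS.

class Sandbox:
    def __init__(self, granted_permissions):
        self.granted = set(granted_permissions)
        self._contacts = ["alice@example.com", "bob@example.com"]  # user data

    def read_contacts(self, app_name):
        # Policy check works exactly as designed...
        if "READ_CONTACTS" not in self.granted:
            raise PermissionError(f"{app_name}: READ_CONTACTS not granted")
        return list(self._contacts)

# A "flashlight" app the user installed and casually granted contacts access
sandbox = Sandbox(granted_permissions={"READ_CONTACTS"})
stolen = sandbox.read_contacts("FreeFlashlight")
# ...yet the app now holds the full address book, ready to exfiltrate.
assert stolen == ["alice@example.com", "bob@example.com"]
```

The sandbox never failed; the attack lives entirely inside the rules the user agreed to.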
Restricting the application ecosystem to an application store (as opposed to the widespread availability of software on today’s PCs) also helps, but it relies on fast removal of malware once it is reported. Call it what you will, this is a form of blacklisting. As AV has shown us, this model isn’t effective enough: malware writers will simply re-register, create another ‘variant’ and repost.
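A minimal sketch of why exact-match blacklisting fails against variants (the payload bytes here are placeholders): if removal is keyed on a fingerprint of the reported sample, any trivial repack produces a new fingerprint and sails past the blacklist.

```python
import hashlib

def fingerprint(app_bytes: bytes) -> str:
    # Identify a removed app by the SHA-256 of its package bytes
    return hashlib.sha256(app_bytes).hexdigest()

blacklist = set()

malware_v1 = b"...malicious payload..."
blacklist.add(fingerprint(malware_v1))  # store removes v1 after reports

# Trivial "variant": identical behavior, one appended byte
malware_v2 = malware_v1 + b"\x00"

assert fingerprint(malware_v1) in blacklist      # v1 is blocked
assert fingerprint(malware_v2) not in blacklist  # v2 gets through
```

This is the same arms race AV signatures lost; the cost of producing a new variant is near zero, while the cost of detecting and removing it is not.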
There are a couple of things we could do. One would be to require developers to show proof of security testing before being allowed to post an application. We require this for procured enterprise software; why not for mobile software? The problem is that there aren’t any standards of proof for this, and a smart hacker would simply fake the results or write code that isn’t vulnerable per se but contains embedded malicious intent (such as copying the address book).
We could also require stronger vetting of developers before they are allowed to post applications. I’ve talked about this concept before in the PC world. This doesn’t prevent vulnerable (and potentially malicious) software from being written, but it would help prevent the rapid re-registration problem above. However, the application store vendors don’t want to do anything that slows the growth in the number of developers and applications in their stores.
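The idea can be sketched as follows (a toy model with hypothetical names, not any store’s real submission API): tie submissions to a credential issued only after identity vetting, so that banning the identity blocks resubmission of variants, unlike blacklisting the binary.

```python
import hmac
import hashlib

# Keys issued only after the developer's identity has been vetted
issued_keys = {"dev-alice": b"key-issued-after-identity-check"}
revoked = set()

def sign(app_bytes: bytes, key: bytes) -> str:
    return hmac.new(key, app_bytes, hashlib.sha256).hexdigest()

def accept_submission(dev_id: str, app_bytes: bytes, signature: str) -> bool:
    key = issued_keys.get(dev_id)
    if key is None or dev_id in revoked:
        return False  # unknown or banned developer: no cheap re-registration
    return hmac.compare_digest(signature, sign(app_bytes, key))

app = b"app package bytes"
sig = sign(app, issued_keys["dev-alice"])
assert accept_submission("dev-alice", app, sig)

# After malware is traced to dev-alice, revoking the identity blocks any
# "variant" submitted under that credential -- the binary's hash is irrelevant.
revoked.add("dev-alice")
assert not accept_submission("dev-alice", app + b"\x00", sig)
```

The design choice is that the expensive, hard-to-repeat step (identity vetting) becomes the thing that gets revoked, rather than a fingerprint the attacker can change for free.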
It seems to me the best option would be for the application store owner to set a minimum standard for security and backdoor/Trojan testing, performed independently. However, this raises the cost for developers (or for the store owner) and potentially slows down the ‘network effect’ of having the largest application store (which attracts more users, which attracts more developers, and so on).
It seems this conflict of interest between the network effect of more developers and applications and improved security won’t be resolved until a significant attack is publicized and users start voting with their dollars.
Category: beyond-anti-virus endpoint-protection-platform general-technology information-security
Tags: application-security application-security-testing-tools endpoint-protection-platform whitelisting
Comments or opinions expressed on this blog are those of the individual contributors only, and do not necessarily represent the views of Gartner, Inc. or its management. Readers may copy and redistribute blog postings on other blogs, or otherwise for private, non-commercial or journalistic purposes, with attribution to Gartner. This content may not be used for any other purposes in any other formats or media. The content on this blog is provided on an "as-is" basis. Gartner shall not be liable for any damages whatsoever arising out of the content or use of this blog.