Even without looking at the statistics, Android developers know that Google Play is crowded: it already hosts about 3.5 million applications, which makes it clear that the competition is fierce. That makes Android testing a very relevant topic, and demand for Android testers, including juniors, is high. Given the competition, developers cannot afford to let users down in any way, and that is only realistic if the QA department gives its all. If testing Android software is your weak point, or you are completely new to the topic, let's go over the basics.

Let's start with something simple and consider mobile testing in general. Mobile testing is the process of testing mobile applications, i.e. applications for smartphones, tablets and other mobile devices (you can read more about mobile application testing in our article). It covers functionality, performance, security and usability, and it can be manual or automated.
The goal of the tester and the entire QA team is to ensure that the application meets the stated business requirements and user expectations. By acquiring new testing skills, you can raise your value in the company, taking on more responsibility and finding vulnerabilities in your own applications. When management sees your findings report, a bonus or a salary review may well follow: you are now also a security specialist, and such specialists are in short supply on the market, which means your value as a professional increases. And remember: everything below is for learning purposes only. You may use this information on real projects only with the permission of the owner.
When you do one thing for a long time it gets boring, so I decided to figure out how vulnerability checks are done in mobile applications. I took the topic from the OWASP Top 10 list, the mobile edition. The OWASP site has since moved, so I can't link to the original page; before the move, the list of vulnerabilities looked like this:
After opening the list and looking through the top mobile vulnerabilities, I realized that half of them closely mirror web vulnerabilities, i.e. the classic OWASP Top 10 we are so used to seeing. That is because native and web applications work the same way: both are clients in a client-server architecture. The client is a native application on a phone in one case and a browser in the other, but both send requests to a server. The conclusion is that half of the techniques for finding holes in native programs can be borrowed from web testing.
Let's start with the set of tools we need for a basic security analysis of an application. To be clear: everything below is about Android applications. iOS has slightly different specifics, which deserve a separate article.
1. A test environment, i.e. an APK. For this we can take the DIVA app: it collects the most common mobile application vulnerabilities so you can practice finding them.
2. A mobile device: either one brought up in the Genymotion emulator or a real one, but it must be rooted, because without root privileges the penetration attempts will not succeed.
3. Santoku Linux. This distribution was created specifically for testing Android applications for vulnerabilities, and it ships with all the necessary tools out of the box.
So, let's start with the prevalence statistics for each hole in the OWASP top. The 2018 statistics show which categories of vulnerabilities deserve the most attention when auditing a mobile application. Now, I think, it's time to analyze each category. We will take them out of order, starting with M9, Reverse Engineering, since a pentest starts there.
Reverse engineering of mobile code is a common phenomenon. It is a straightforward, unauthorized analysis of things such as:
program source code;
To get the application's source code, drop the mobile app's installation file, i.e. the APK, onto Santoku Linux, open a console and run a few simple commands. First, run unzip -d diva-beta base.apk. As you guessed, it unpacks the APK and puts all the files into a folder we named diva-beta.
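The reason plain unzip works is that an APK is an ordinary ZIP archive. The sketch below (class and file names are mine, and the entry contents are placeholders, not a real manifest or DEX) builds a tiny stand-in "APK" and walks its entries with the JDK's own java.util.zip, which is exactly what unzip does under the hood:

```java
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import java.util.zip.ZipEntry;
import java.util.zip.ZipFile;
import java.util.zip.ZipOutputStream;

public class ApkIsZip {

    // List the entry names of any ZIP/APK file, just like `unzip -l`.
    public static List<String> listEntries(File apk) throws IOException {
        List<String> names = new ArrayList<>();
        try (ZipFile zip = new ZipFile(apk)) {
            zip.stream().forEach(e -> names.add(e.getName()));
        }
        return names;
    }

    // Build a minimal ZIP that imitates an APK's layout (contents are fake).
    public static File makeFakeApk() throws IOException {
        File apk = File.createTempFile("demo", ".apk");
        apk.deleteOnExit();
        try (ZipOutputStream out = new ZipOutputStream(new FileOutputStream(apk))) {
            out.putNextEntry(new ZipEntry("AndroidManifest.xml"));
            out.closeEntry();
            out.putNextEntry(new ZipEntry("classes.dex")); // compiled Dalvik bytecode lives here
            out.closeEntry();
        }
        return apk;
    }

    public static void main(String[] args) throws IOException {
        for (String name : listEntries(makeFakeApk())) {
            System.out.println(name);
        }
    }
}
```

In a real APK the interesting entry is classes.dex, which is what the next step feeds into dex2jar.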
Next, go into that folder and run d2j-dex2jar classes.dex. This converts the Dalvik bytecode in classes.dex into an ordinary Java archive; if you open classes.dex directly without converting it, you will see only gibberish. After the command finishes, a new file called classes-dex2jar.jar appears in the folder, containing the program's classes in a form a decompiler can turn into human-readable source code.
To open this file and start studying the program's code, we need the JD-GUI decompiler, which is also preinstalled in our Linux distribution. Run jd-gui classes-dex2jar.jar.

JD-GUI will open, show us the entire decompiled source code, and let us look for the application's shortcomings, that is, find some vulnerabilities, right away.
From the M9 category let's move on to M1. Misuse of operating system features or platform security measures falls into this category; it happens frequently and can seriously affect vulnerable applications. Let's look at an example. Since we already have the program's source code thanks to the previous vulnerability, we will study one of this APK's activities.
We can see that the developer used logcat while debugging the program to understand what errors occurred in this field, but forgot to remove the debugging call when compiling the release build. What does this mean for users of the app? That actions will be logged whenever there are errors or warnings. So when the user fills in the form (let's say it is a form for accepting card data), that data will show up in the program's logs if the user makes a mistake while filling it out or triggers a validation warning. It is not hard to guess that an attacker can gain access to these logs.
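A minimal plain-Java sketch of that mistake and its usual fix. On a real device the call would be android.util.Log.d(...) and the guard would be BuildConfig.DEBUG; here LeakyLog, its DEBUG flag and the sink list are stand-ins I invented so the idea runs outside Android:

```java
import java.util.ArrayList;
import java.util.List;

public class LeakyLog {
    // True only in debug builds; release builds must keep this false.
    static final boolean DEBUG = false;

    // Stand-in for logcat: everything added here is visible to an attacker with log access.
    static final List<String> sink = new ArrayList<>();

    static void d(String tag, String msg) {
        sink.add(tag + ": " + msg);
    }

    // BAD: card data goes straight into the log on every validation error.
    public static void validateCardBad(String cardNumber) {
        d("Checkout", "card validation failed for " + cardNumber);
    }

    // BETTER: debug output sits behind a flag, so release builds log nothing sensitive.
    public static void validateCardGood(String cardNumber) {
        if (DEBUG) {
            d("Checkout", "card validation failed for " + cardNumber);
        }
    }

    public static void main(String[] args) {
        validateCardBad("4111 1111 1111 1111");
        validateCardGood("4111 1111 1111 1111");
        System.out.println("log entries visible to an attacker: " + sink.size());
    }
}
```

The point is not the logging API itself but the habit: anything written to a shared log in a release build should be treated as public.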
This risk in the OWASP list warns the developer community about insecure storage of data on a mobile device. An attacker can either gain physical access to a stolen device or get into it using malware.
In the case of physical access to the device, an attacker can easily access the file system of the device after connecting it to a computer. Many freely distributed programs allow an attacker to gain access to directories and the personal data they contain.
Two things to keep in mind here:
confidential data in the application must be stored in encrypted form;
apps can share data with other apps.
As an example, take the registration form in a mobile application. I, as a user, registered in the app. The developer then put my data, unencrypted, into a public folder that other apps installed on the phone can access. The result: my data has been leaked.
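A sketch of the fix: encrypt user data with AES-GCM before it ever touches disk, using the JDK's standard javax.crypto API. On Android the key should additionally live in the Android Keystore and the file in the app's private directory; here (with class and sample data of my own invention) we just show the encrypt/decrypt round trip:

```java
import java.nio.charset.StandardCharsets;
import java.security.SecureRandom;
import java.util.Base64;
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;

public class SecureStore {

    public static SecretKey newKey() throws Exception {
        KeyGenerator kg = KeyGenerator.getInstance("AES");
        kg.init(256);
        return kg.generateKey();
    }

    // Returns base64(iv) + ":" + base64(ciphertext); this string is what gets written to storage.
    public static String encrypt(SecretKey key, String plaintext) throws Exception {
        byte[] iv = new byte[12];
        new SecureRandom().nextBytes(iv); // fresh IV for every record
        Cipher c = Cipher.getInstance("AES/GCM/NoPadding");
        c.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(128, iv));
        byte[] ct = c.doFinal(plaintext.getBytes(StandardCharsets.UTF_8));
        return Base64.getEncoder().encodeToString(iv) + ":"
             + Base64.getEncoder().encodeToString(ct);
    }

    public static String decrypt(SecretKey key, String stored) throws Exception {
        String[] parts = stored.split(":");
        byte[] iv = Base64.getDecoder().decode(parts[0]);
        byte[] ct = Base64.getDecoder().decode(parts[1]);
        Cipher c = Cipher.getInstance("AES/GCM/NoPadding");
        c.init(Cipher.DECRYPT_MODE, key, new GCMParameterSpec(128, iv));
        return new String(c.doFinal(ct), StandardCharsets.UTF_8);
    }

    public static void main(String[] args) throws Exception {
        SecretKey key = newKey();
        String stored = encrypt(key, "user@example.com;hunter2");
        System.out.println("on disk: " + stored); // unreadable without the key
        System.out.println("decrypted: " + decrypt(key, stored));
    }
}
```

With this in place, a neighbor app or a thief with file-system access sees only ciphertext; the whole scheme then stands or falls on where the key is kept, which is why the Keystore matters.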
M3 is another common risk that mobile app developers forget about. Data transfer to and from a mobile application usually happens over a carrier network or Wi-Fi, and attackers have repeatedly succeeded in exposing users' personal information when this transmission is not secured. Hackers intercept user data on a local network through a compromised Wi-Fi network, through routers, cell towers or proxy servers, or via an infected application carrying malware. And when the app sends user-submitted data to the server, some of the requests are sometimes sent over HTTP instead of HTTPS.
An example of exploiting this vulnerability: an attacker sets up a compromised Wi-Fi network that the user connects to. This man in the middle then analyzes all the traffic passing through it, so user data sent to the server over HTTP can be intercepted, and the attacker will see the victim's credentials in the captured packet. Below are examples of transferring data badly and well, along with a view of the intercepted traffic. Badly:
Data should be encrypted: even if an attacker starts listening to traffic by connecting to the same network as us, he will at least not see the information in the clear. In other words, we make the data much harder to steal, as shown in the figure below.
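The simplest in-code guard against the HTTP mistake is to refuse to talk to any endpoint whose scheme is not https. This tiny sketch (class name and URLs are mine) does just that; on Android the platform-level equivalent is a Network Security Config with cleartext traffic disabled:

```java
import java.net.URI;

public class RequireHttps {

    // Accept an endpoint only if it uses TLS; everything else is rejected before any data is sent.
    public static boolean isSafeEndpoint(String url) {
        return "https".equalsIgnoreCase(URI.create(url).getScheme());
    }

    public static void main(String[] args) {
        System.out.println(isSafeEndpoint("http://api.example.com/login"));  // false
        System.out.println(isSafeEndpoint("https://api.example.com/login")); // true
    }
}
```

A check like this belongs in the networking layer, not scattered around call sites, so a single forgotten http:// URL cannot silently downgrade the whole app.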
Okay, we've applied the encryption discussed in the previous section. But if we use weak encryption or decryption processes, or the algorithms behind them are flawed, user data becomes vulnerable again.
There are three ways that attackers try to exploit cryptographic problems:
gain physical access to the mobile device;
monitor network traffic;
use malicious programs on the device to access encrypted data.
To understand what methods developers use to encrypt data, we need to look at the source code we already have.
In the picture above, we can see that the developer used the MD5 hashing algorithm, which literally screams: "Break me!" It is one of the weakest methods.
When intercepting a request from a user, we will see seemingly protected data. But if we paste this hash into some online decoder, we will see the user's real password.
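To see why an intercepted MD5 hash is as good as plaintext, it is enough to hash a common password: the digest of "password" appears in every public MD5 lookup table. The sketch below (class and method names are mine; MessageDigest and PBKDF2 are real JDK APIs) also shows the usual replacement, a salted and deliberately slow key-derivation function:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.PBEKeySpec;

public class WeakHashDemo {

    // Unsalted MD5: fast to compute, so fast to reverse via lookup tables.
    public static String md5Hex(String s) throws Exception {
        byte[] d = MessageDigest.getInstance("MD5").digest(s.getBytes(StandardCharsets.UTF_8));
        StringBuilder sb = new StringBuilder();
        for (byte b : d) sb.append(String.format("%02x", b));
        return sb.toString();
    }

    // PBKDF2: salted and slow by design, so bulk lookup tables do not apply.
    public static byte[] pbkdf2(char[] password, byte[] salt) throws Exception {
        PBEKeySpec spec = new PBEKeySpec(password, salt, 100_000, 256);
        return SecretKeyFactory.getInstance("PBKDF2WithHmacSHA256")
                .generateSecret(spec).getEncoded();
    }

    public static void main(String[] args) throws Exception {
        // This exact digest is in every public MD5 lookup table.
        System.out.println(md5Hex("password")); // 5f4dcc3b5aa765d61d8327deb882cf99
        System.out.println(pbkdf2("password".toCharArray(),
                "per-user-salt".getBytes(StandardCharsets.UTF_8)).length + " bytes");
    }
}
```

The salt must be unique per user and stored alongside the derived key; without it even PBKDF2 degrades into something table-attackable.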
Poor authentication schemes allow an attacker to anonymously perform any user-accessible action on the mobile application or on the server behind it. Weak authentication is quite common in mobile applications because the input form factor of mobile devices encourages short passwords, often four-digit PINs. Authentication requirements for mobile applications can also differ significantly from traditional web schemes due to availability requirements: in traditional web applications, users are expected to be online and authenticate in real time.
As soon as the attacker realizes how vulnerable the authentication scheme is, he fakes or bypasses authentication by sending requests directly to the server that backs the mobile application, without involving the app at all. For example, the attacker can use an interception proxy, say the familiar Burp Suite; it is enough for him to analyze which requests the application's pages send.
Consider the authorization request: we can see the data being sent to the server. The attacker then simply tries to extract information from the server by replaying the original request with modifications, working through the highlighted fields until he achieves unauthorized access to another user's data. Specifically, he can:
pay attention to the session;
pay attention to the type of user;
try to replace the token by selecting the necessary one (with access to administrative functions), etc.
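All of those manipulations only work when the server trusts fields the client sends. A hardened backend derives the role from its own session store keyed by the token, never from a request field. Everything in this sketch (tokens, roles, class name) is hypothetical:

```java
import java.util.HashMap;
import java.util.Map;

public class ServerSideAuth {

    // Server-side session store: token -> role, populated at login time.
    static final Map<String, String> sessions = new HashMap<>();
    static {
        sessions.put("tok-alice", "user");
        sessions.put("tok-root", "admin");
    }

    // BAD: the client claims its own role in the request body.
    public static boolean canAdministrateBad(String claimedRole) {
        return "admin".equals(claimedRole); // the attacker just sends "admin"
    }

    // GOOD: the role is looked up server-side from the session token.
    public static boolean canAdministrateGood(String token) {
        return "admin".equals(sessions.get(token));
    }

    public static void main(String[] args) {
        System.out.println(canAdministrateBad("admin"));      // true: bypassed
        System.out.println(canAdministrateGood("tok-alice")); // false: blocked
    }
}
```

With this design, replacing a token in Burp gains nothing unless the attacker already holds a token that the server itself mapped to admin.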
Many people confuse risk M4 with risk M6, since both relate to user credentials. The developer should keep in mind that M6 involves abusing authorization while logged in as a legitimate user, unlike M4, where an attacker tries to bypass the authentication process altogether, acting as an anonymous user.
Once an attacker has fooled the application's security mechanism and gained access as a legitimate user, his next task in M6 is to reach administrative functionality by brute-forcing requests, among which he may stumble upon administrator commands. Attackers typically use botnets or mobile malware to exploit authorization vulnerabilities, and a successful breach also lets an attacker run binary attacks against the device offline. The search can again be done with Burp Suite, by trying to execute admin-only requests as a normal user; see the M4 vulnerability above.
Risk M7 arises from poor or inconsistent coding practices, where each member of the development team follows different conventions and creates inconsistencies in the final code. The saving grace for developers is that even though this risk is widespread, detecting it is hard: it is not easy for hackers to spot patterns of bad coding, which often requires difficult manual analysis. For users, poor code quality shows up as slow request processing and failures to load the necessary information correctly. A famous example is WhatsApp, whose engineers discovered that a buffer could be overflowed by sending a specially crafted series of packets; the victim did not even have to answer the call, and the attacker could execute arbitrary code.
It turned out that this vulnerability was used to install spyware on phones; the exploit was sold by the Israeli company NSO Group. The lesson: do not use functions that can overflow a buffer, such as the classic unchecked C string functions strcpy, strcat, sprintf and gets.
Everything is clear here, and there is little to tell: just don't download APKs from third-party resources, because hackers love tampering with the code in apps, since it gives them unlimited access:
to other apps on your phone;
to user behavior.
Remember how in M9 we reverse-engineered the program and obtained the source code? Now we can modify it (plant some kind of worm in there that accesses data from other applications), recompile it and upload the APK to some site with a note saying it can be downloaded here for free 🙂
Before an application is ready for release, the development team often leaves code in it for easy access to an internal server. This code does not affect the operation of the program, but if an attacker finds these hints, the developers will not be happy. What if it turns out to be credentials for logging into the system with admin rights?
As an example, let’s take a page where you need to enter a key to access important data. Let’s explore what this page looks like in code.
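Here is a hypothetical reconstruction of what such a screen tends to look like after decompilation; the key value, class and method names are all invented, but DIVA's hardcoding exercise follows the same pattern, with the "secret" sitting right in the source:

```java
public class AccessKeyCheck {

    // Anyone who decompiles the APK reads this constant directly in JD-GUI.
    private static final String ACCESS_KEY = "vendorSecretKey"; // hypothetical value

    public static boolean checkAccess(String entered) {
        return ACCESS_KEY.equals(entered);
    }

    public static void main(String[] args) {
        System.out.println(checkAccess("guess"));           // false
        System.out.println(checkAccess("vendorSecretKey")); // true: key found in the source
    }
}
```

Since the comparison happens entirely on the device, no amount of server-side hardening helps: the secret must simply not be compiled into the client.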
We can see that the developer has left a hint about what needs to be entered to access the data we want. That's all for today, see you soon.