These new topics have been more about some fundamental design decisions in Android, and claims that those decisions are wrong. I’d like to help people better understand (and judge) these discussions by giving some real background on why Android’s UI was designed the way it is and how it actually works.
One issue that has been raised is that Android doesn’t use thread priorities to reduce how much background work interrupts the user interface. This is outright wrong. It actually uses a number of priorities, which you can even find defined right here http://developer.android.com/reference/android/os/
The most important of these are the background and default priorities. User interface threads normally run at the default priority; background threads run in the background priority. Application processes that are in the background have all of their threads forced to the background priority.
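As a small, hedged illustration of what this looks like from application code, here is a sketch of a worker thread dropping itself to the background priority using the constants defined in android.os.Process (the work inside the thread is just a placeholder):

    import android.os.Process;

    // A worker thread that explicitly moves itself to the background
    // priority so it competes less with UI threads for CPU time. (When a
    // whole process goes to the background, the system forces all of its
    // threads down to this priority anyway.)
    Thread worker = new Thread(new Runnable() {
        @Override
        public void run() {
            Process.setThreadPriority(Process.THREAD_PRIORITY_BACKGROUND);
            // ... long-running work goes here ...
        }
    });
    worker.start();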
Android’s background priority is actually pretty interesting. It uses a Linux facility called cgroups to put all background threads into a special scheduling group which, all together, can’t use more than 10% of the CPU. That is, if you have 10 processes in the background all trying to run at the same time, combined they can’t take more than 10% of the CPU time away from the foreground threads that need it. This is enough to allow background threads to make some forward progress, without having enough of an impact on the foreground threads to be generally visible to the user.
(You may have noticed that a “foreground” priority is also defined. This is not used in current Android; it was in the original implementation, but we found that the Linux scheduler does not give enough preference to threads based on pure priority, so we switched to cgroups in Android 1.6.)
I have also seen a number of claims that the basic Android design is fundamentally flawed and archaic because it doesn’t use a rendering thread like iOS. There are certainly some advantages to how iOS works, but this view is too focused on one specific detail to be useful, and glosses over the actual similarities in how the two behave.
Android had a number of original design goals that were very different from iOS’s. A key goal of Android was to provide an open application platform, using application sandboxes to create a much more secure environment that doesn’t rely on a central authority to verify that applications do what they claim. To achieve this, it uses Linux process isolation and user IDs to prevent each application from being able to access the system or other applications in ways that are not controlled and secure.
This is very different from iOS’s original design constraints, which, remember, didn’t allow any third-party applications at all.
An important part of achieving this security is having a way for individual UI elements to share the screen in a secure way. This is why there are windows on Android. The status bar and its notification shade are windows owned and drawn by the system. These are separate from the application’s window, so the application can not touch anything about the status bar, such as to scrape the text of SMS messages as they are displayed there. Likewise the soft keyboard is a separate window, owned by a separate application, and it and the application can only interact with each other through a well defined and controlled interface. (This is also why Android can safely support third party input methods.)
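To make that well defined and controlled interface concrete, here is a rough, hedged sketch (written as if inside an Activity with an EditText field named editText, which is purely illustrative) of the only kind of access an application has to the keyboard window: it can ask the InputMethodManager system service to show or hide the keyboard for one of its own views, but it never touches the IME’s window directly.

    import android.content.Context;
    import android.view.inputmethod.InputMethodManager;

    // Ask the system to show the soft keyboard for our own text field. The
    // keyboard itself lives in a separate window owned by the IME
    // application; all we can do is make requests through this service.
    InputMethodManager imm =
            (InputMethodManager) getSystemService(Context.INPUT_METHOD_SERVICE);
    imm.showSoftInput(editText, InputMethodManager.SHOW_IMPLICIT);

    // Hiding it is likewise a request, not direct window manipulation.
    imm.hideSoftInputFromWindow(editText.getWindowToken(), 0);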
Another objective of Android was to allow close collaboration between applications, so that for example it is easy to implement a share API that launches a part of another application integrated with the original application’s flow. As part of this, Android applications traditionally are split into pieces (called “Activities”) that each handle a single specific part of the application’s UI. For example, the contacts list is one activity, the details of a contact is another, and editing a contact is a third. Moving between those parts of the contacts UI means switching between these activities, and each of these activities is its own separate window.
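The share case is a good example of this collaboration in practice. As a rough sketch (written as if inside an Activity, with a placeholder string for the shared content), handing data off to another application’s share UI looks something like this:

    import android.content.Intent;

    // Ask the system for any activity that can handle plain-text sharing.
    // The chooser dialog and the receiving activity are other windows, in
    // other processes, woven into this application's flow.
    Intent share = new Intent(Intent.ACTION_SEND);
    share.setType("text/plain");
    share.putExtra(Intent.EXTRA_TEXT, "Some text to share");  // placeholder
    startActivity(Intent.createChooser(share, "Share via"));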
Now we can see something interesting: in almost all of the places in the original Android UI where you see animations, you are actually seeing windows animate. Launching Contacts is an animation of the home screen window and the contacts list window. Tapping on a contact to see its details is an animation of the contacts list window and the contacts details window. Displaying the soft keyboard is an animation of the keyboard window. Showing the dialog where you pick an app to share with is an animation of a window displaying that dialog.
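Applications can even influence these window animations themselves. As a hedged sketch (DetailsActivity is a hypothetical activity in the same app, and overridePendingTransition() arrived a little after 1.0, in API level 5), an activity can pick the animation used when it launches another activity:

    import android.content.Intent;

    // Inside an Activity: launch another activity (another window) and
    // choose the animation used as the two windows transition.
    // DetailsActivity is a hypothetical activity in this application.
    startActivity(new Intent(this, DetailsActivity.class));
    overridePendingTransition(android.R.anim.fade_in, android.R.anim.fade_out);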
When you see a window on screen, what you are seeing is actually something called a “surface”. This is a separate piece of shared memory that the window draws its UI in, and is composited with the other windows to the screen by a separate system service (in a separate thread, running at a higher than normal priority) called the “surface flinger.” Does this sound familiar? In fact this is very much like what iOS is doing with its views being composited by a separate thread, just at a less fine-grained but significantly more secure level. (And this window composition has been hardware accelerated in Android from the beginning.)
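A SurfaceView makes this arrangement directly visible to application code: the view’s content is a distinct surface that the application can draw into (even from another thread), and the surface flinger composites it to the screen along with the other windows. A minimal sketch, assuming the view is placed in a layout like any other view:

    import android.content.Context;
    import android.graphics.Canvas;
    import android.graphics.Color;
    import android.view.SurfaceHolder;
    import android.view.SurfaceView;

    // A view backed by its own surface: a piece of shared memory that the
    // surface flinger composites to the screen with the other windows.
    public class DemoSurfaceView extends SurfaceView
            implements SurfaceHolder.Callback {

        public DemoSurfaceView(Context context) {
            super(context);
            getHolder().addCallback(this);
        }

        @Override
        public void surfaceCreated(SurfaceHolder holder) {
            // Lock the surface's buffer, draw into it, and post it for
            // composition. This could just as well be done from a thread
            // other than the UI thread.
            Canvas canvas = holder.lockCanvas();
            if (canvas != null) {
                canvas.drawColor(Color.BLACK);
                holder.unlockCanvasAndPost(canvas);
            }
        }

        @Override
        public void surfaceChanged(SurfaceHolder holder, int format,
                int width, int height) {
        }

        @Override
        public void surfaceDestroyed(SurfaceHolder holder) {
        }
    }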
The other main interesting interaction in the UI is tracking your finger -- scrolling and flinging a list, swiping a gallery, etc. These interactions involve updating the contents inside of a window, so require re-rendering that window for each movement. However, being able to do this rendering off the main thread probably doesn’t gain you much. These are not simple “move this part of the UI from X to Y, and maybe tell me when you are done” animations -- each movement is based on events received about the finger on the screen, which need to be processed by the application on its main thread.
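In code, that loop looks roughly like the following hedged sketch (DragScrollView is a made-up name): each MotionEvent arrives on the main thread, the view decides where its content should now be, and the window’s contents are re-rendered for that position.

    import android.content.Context;
    import android.view.MotionEvent;
    import android.view.View;

    // A stripped-down scrolling view: every finger movement is an event
    // the main thread must interpret before the window can be redrawn.
    public class DragScrollView extends View {
        private float lastY;

        public DragScrollView(Context context) {
            super(context);
        }

        @Override
        public boolean onTouchEvent(MotionEvent event) {
            switch (event.getAction()) {
                case MotionEvent.ACTION_DOWN:
                    lastY = event.getY();
                    return true;
                case MotionEvent.ACTION_MOVE:
                    float delta = lastY - event.getY();
                    lastY = event.getY();
                    scrollBy(0, (int) delta);  // triggers a re-render
                    return true;
                default:
                    return super.onTouchEvent(event);
            }
        }
    }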
That said, being able to avoid redrawing all of the contents of the parts of the UI that are moving can help performance. And this is also a technique that Android has employed since before 1.0; UI elements like a ListView that want to scroll their content can call http://developer.android.com/reference/android/vie
Traditionally on Android, views only have their drawing cache enabled as a transient state, such as while scrolling or tracking a finger. This is because the cache introduces a fair amount of extra overhead: extra memory for the bitmap (which can easily add up to several times the size of the actual frame buffer if there are a number of visual layers), and when the contents inside of a cached view need to be redrawn it is more expensive, because there is an additional step required to draw the cached bitmap back to the window.
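For what it’s worth, I believe the calls behind that truncated link are View’s drawing cache methods, which have been on View since 1.0. A hedged sketch of the transient pattern described above (the methods wrapping it are made-up names):

    import android.view.View;

    // While a drag or fling is in progress, cache the content's rendering
    // as a bitmap so scrolling just moves pixels instead of redrawing.
    void onScrollStarted(View content) {
        content.setDrawingCacheEnabled(true);
    }

    // Once the transient state ends, drop the cache so we stop paying the
    // extra memory and the extra copy-the-bitmap-back step on redraws.
    void onScrollFinished(View content) {
        content.setDrawingCacheEnabled(false);
        content.destroyDrawingCache();
    }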
So, all those things considered, in Android 1.0 having each view drawn into a texture and those textures composited to the window in another thread is just not that much of a gain, with a lot of cost. The cost is also in engineering time -- our time was better spent working on other things like a layout-based view hierarchy (to provide flexibility in adjusting for different screen sizes) and “remote views” for notifications and widgets, which have significantly benefited the platform as it has evolved.
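For reference, the “remote views” mentioned here are the RemoteViews API used by notifications and app widgets: a description of a view hierarchy that another process (such as the home screen) inflates and displays on the application’s behalf. A minimal sketch, assuming a widget layout resource named widget_layout containing a TextView with the id widget_text (both names are made up):

    import android.appwidget.AppWidgetManager;
    import android.content.Context;
    import android.widget.RemoteViews;

    // Build a description of a view hierarchy that another process will
    // inflate and display for us, then hand it to the widget host.
    void updateWidget(Context context, AppWidgetManager manager, int widgetId) {
        RemoteViews views = new RemoteViews(context.getPackageName(),
                R.layout.widget_layout);                  // assumed resource
        views.setTextViewText(R.id.widget_text, "Hello"); // assumed id
        manager.updateAppWidget(widgetId, views);
    }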
In fact it was just not feasible to implement hardware accelerated drawing inside windows until recently. Because Android is designed around having multiple windows on the screen, having the drawing inside each window be hardware accelerated means requiring that the GPU and driver support multiple active GL contexts in different processes running at the same time. The hardware of that era just didn’t support this, even ignoring the additional memory it would have needed, which simply wasn’t available. Even today we are in the early stages of this -- most mobile GPUs still have fairly expensive GL context switching.
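When hardware accelerated drawing inside windows did arrive (in Android 3.0, API level 11), it came as an opt-in per application or per window. As a hedged sketch, this is roughly how a single window requests it from code rather than the manifest (R.layout.main is an assumed layout resource, and the flag must be set before the content view is installed):

    import android.app.Activity;
    import android.os.Bundle;
    import android.view.WindowManager;

    public class DemoActivity extends Activity {
        @Override
        protected void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            // Request hardware accelerated drawing for this window; the
            // flag was added in API level 11.
            getWindow().setFlags(
                    WindowManager.LayoutParams.FLAG_HARDWARE_ACCELERATED,
                    WindowManager.LayoutParams.FLAG_HARDWARE_ACCELERATED);
            setContentView(R.layout.main);  // assumed layout resource
        }
    }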
I hope this helps people better understand how Android works. And just to be clear again, as with my last point: I am not writing this to make excuses for whatever things people don’t like about Android. I just get tired of seeing people write egregiously wrong explanations of how Android works and, worse, present themselves as authorities on the topic.
There are of course many things that can be improved in Android today, just as there are many things that have been improved since 1.0. As other more pressing issues are addressed, and hardware capabilities improve and change, we continue to push the platform forward and make it better.
One final thought. I saw an interesting comment from Brent Royal-Gordon on what developers sometimes need to do to achieve 60fps scrolling in iOS lists: “Getting it up to sixty is more difficult—you may have to simplify the cell's view hierarchy, or delay adding some of the content, or remove text formatting that would otherwise require a more expensive text rendering API, or even rip the subviews out of the cell altogether and draw everything by hand.”
I am no expert on iOS, so I’ll take that as true. These are the exact same recommendations that we have given to Android’s app developers, and based on this statement I don't see any indication that there is something intrinsically flawed about Android in making lists scroll at 60fps, any more than there is in iOS.
Source: https://plus.google.com/u/0/105051985738280261832/posts