Monthly Archives: April 2010

iPhone app orientation (portrait vs landscape)

Introduction

The UIInterfaceOrientation property that can be put in an iPhone app’s Info.plist file has led me into a confusing corner of iPhone app orientation design.
At the top level, there are two ways the orientation can be considered: 1) the physical orientation of the device; 2) the orientation of the on-screen user interface.
When developing an app, these two things can be inconsistent with each other, which is confusing.
The iPhone has 7 different physical DEVICE orientations, 3 of which I am not covering (face-up, face-down, unknown). The 4 device orientations considered in this writing are: landscape-left (home button on the right), landscape-right (home button on the left), portrait (home button down), and portrait-upside-down (home button up).
The iPhone has 4 different INTERFACE orientations. Two of them, portrait and portrait-upside-down, match the physical device portrait orientations. The other two, landscape-right and landscape-left, are the opposite of their device counterparts: interface-landscape-right equals device-landscape-left, and vice versa. If the device is rotated one way, the interface rotates the other way, to maintain a consistent “up” direction.
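As a minimal sketch (logging only, for illustration), the two values can be read side by side like this:

// Device orientation reports Unknown until generation is switched on.
[[UIDevice currentDevice] beginGeneratingDeviceOrientationNotifications];
UIDeviceOrientation physical = [UIDevice currentDevice].orientation;
UIInterfaceOrientation onScreen = [UIApplication sharedApplication].statusBarOrientation;
// With the home button on the right, these come back as
// UIDeviceOrientationLandscapeLeft and UIInterfaceOrientationLandscapeRight:
// mirrored names, exactly as described above.
NSLog(@"device: %d  interface: %d", physical, onScreen);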

Setup

The specifics of my app:
I have a subclassed UIViewController, and its main view property is assigned a subclassed UIView object.
The main view is added to the UIApplication’s window.
Standard setup so far.
The UIView subclass has an overridden – (void)layoutSubviews method.
The UIViewController subclass has a method, registered by the app delegate with the default notification centre, that receives notifications on orientation change.
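A minimal sketch of that registration (the controller variable and selector name are placeholders of mine):

// The device must be told to start reporting orientation changes,
// then the controller is registered with the default notification centre.
[[UIDevice currentDevice] beginGeneratingDeviceOrientationNotifications];
[[NSNotificationCenter defaultCenter] addObserver:viewController
                                         selector:@selector(orientationDidChange:)
                                             name:UIDeviceOrientationDidChangeNotification
                                           object:nil];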

Testing

There is something inconsistent between the notification centre’s orientation-change notifications and the value of [UIApplication sharedApplication].statusBarOrientation.
Now to test:
For a start, the Info.plist file has the UIInterfaceOrientation key set to UIInterfaceOrientationLandscapeRight.
layoutSubviews always fires first, and checking [UIApplication sharedApplication].statusBarOrientation during that method shows landscape-right, as defined in the plist.
Then, sometimes (but not always), the orientation-change notification is sent to the registered object.
Starting the app while the device is physically in portrait, NO orientation-change notification gets sent.
However, starting while the device is in any other physical orientation, an orientation-change notification is sent.
So it seems there is an assumed default physical portrait orientation, which cannot be manually defined, yet the default interface orientation can be pre-defined with the UIInterfaceOrientation key in Info.plist and read back through [UIApplication sharedApplication].statusBarOrientation.
And the behaviour gets even more haywire if the device is then physically re-oriented after the app starts.
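For reference, the check inside the view subclass is nothing fancier than this sketch:

- (void)layoutSubviews {
 [super layoutSubviews];
 // On first launch this reports whatever the UIInterfaceOrientation
 // Info.plist key says (landscape-right here), regardless of how the
 // device is physically being held.
 UIInterfaceOrientation o = [UIApplication sharedApplication].statusBarOrientation;
 NSLog(@"layoutSubviews sees interface orientation %d", o);
}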

Conclusions

Today was spent working on front-end app menu screens. My project originally started with the style and layout of screens drawn on paper in landscape orientation. But before today, I did not know it was possible to use landscape-orientation images in Interface Builder nib files. Now I know, and they can be displayed properly if the right combination of settings is applied, such as setting the simulated interface-element orientation in IB’s inspector to landscape, together with the matching interface orientation in the app.
Until today, I had done all my customized orientation positioning and rotation using the views’ center, frame, bounds, and transform properties.
So now I can possibly adopt the built-in auto-resize/rotation system. Even if I don’t use it for rotation, it could still make positioning easier. It’s not difficult (now) to calculate the horizontal and vertical center of a shape and position it within another shape whose width and height keep changing, but an easier way is always a good thing!
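For reference, the manual approach looks roughly like this (the view names are placeholders of mine):

// Center a subview inside a parent whose width and height keep changing,
// then rotate it a quarter-turn so it reads as landscape.
subview.center = CGPointMake(parentView.bounds.size.width  / 2.0f,
                             parentView.bounds.size.height / 2.0f);
subview.transform = CGAffineTransformMakeRotation(M_PI / 2.0f);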
Working with all this is leading me to the following conclusions:
  • An app that has some parts that work only in landscape and other parts that work only in portrait should be upgraded so that all parts work in all orientations.
  • An app that supports either (i.e. every) orientation should not have the UIInterfaceOrientation key set, because it leads to confusing situations (unless someone can suggest a good reason for it).
  • But an app that supports any orientation should, as a standard, start with a Default image designed to be portrait-oriented, not landscape, unlike so many of the iPhone games I’ve bought.
  • This final point is, personally speaking, the biggest problem to be solved: the conflicting desires, or requirements; the desire to be universal and support either/every orientation (oh, the bad memories of the browser wars come flying back), and the desire to also prioritize one orientation over the other, less-prioritized one.

That’s all, folks!

Terrible conflict, but I am happy I could articulate this whole mess!

Game Optimization Adventures, part 1

Starting on Wednesday (today is Friday), I began the process of optimizing the iPhone game app.

Using Instruments, I could see total Live Bytes averaging around 40 MB while the game was running. Since then, I’ve done a couple of tasks and also introduced a PVR texture (haven’t got it working yet). Suddenly, the live bytes (after everything has loaded) has been reduced not to 20 or 30 MB, but to 4. When the app is started, it loads up about 15-20 MB of memory, and then dumps all of it except 4 MB. I’ll mention more about this later on.

Now, that’s a colossal drop in memory use: from peaking at 40 MB, down to a 15-20 MB load, and now a mysterious (but very happy) drop after loading to around 4 MB.

In the last 4 hours (and some hours yesterday) I’ve been making pages of notes on values and changes from different configurations of the app. One thing I’ve done is build 4 separate Instruments save files using the Xcode Run > Run with Performance Tool > Object Allocations automated setup. The four setups use these configurations: JPG texture, recording stopped at peak live bytes; JPG texture, recording stopped after the drop to 4 MB; PVR texture, recording stopped at peak live bytes; and PVR texture, recording stopped after the drop to 4 MB.

These files have let me discover the strange, inexplicable fact of the moment (about which I have not formed any conclusion): the memory allocated internally by glTexImage2D for my JPG textures is being released.

One reason I initially dropped from 40 MB to a moderately lower amount is that I removed an unused texture that was hogging 12 MB all by itself. My bad. Actually, it wasn’t my fault. Haha, blame someone else. It came from the 3rd texture channel in Blender, and the game engine I use claims to use only two texture channels from Blender, because OpenGL ES only supports two texture channels. Fine. That’s why the documentation for the game engine recommends using texture channels outside the first two for anything extra, which I did. But apparently the exporter exports those channels anyway, despite them being beyond the first and second. So I removed that texture (2048×2048 pixels × 3 bytes/pixel = 12 MB). I also removed a handful of other textures I had used before, and the peak load size is now down from 40 MB to 15-20 MB. The 5 MB spread in that range is the size difference between the JPG and the PVR texture.
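For scale, that one texture costs exactly what a raw RGB upload implies; a sketch of such an upload (pixelData stands in for the decoded image bytes):

// An uncompressed 2048x2048 RGB texture:
// 2048 * 2048 pixels * 3 bytes/pixel = 12,582,912 bytes, about 12 MB,
// copied into driver memory by this one call.
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB,
             2048, 2048, 0,
             GL_RGB, GL_UNSIGNED_BYTE, pixelData);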

So, assuming I’m still using a JPG texture:

The peak live bytes is 20.30 MB, and the post-drop live bytes is 4.67 MB. Watching the ObjectAlloc graph in the upper pane of Instruments, it sits at the peak for a second, and then suddenly nose-dives to the low value.

Now to examine the useJPG-peakLiveBytes Instruments file: sort the Live Bytes column in Summary View from high to low, and you can see which Malloc categories are taking up the most space. In my case, the biggest ones are Malloc 4.00 MB (# Living = 2) and Malloc 1.00 MB (# Living = 5). Beyond that, there is Malloc 2.66 MB (this is not a texture; it is part of the Bullet physics engine, so I ignore it), and Malloc 256.00 KB (# Living = 4). Add up just the 2×4 + 5×1 + 4×0.25 and there’s 14 MB out of 20 MB right there. There are dozens or hundreds more memory allocations, but they become very small and are not significant compared to these large image textures.

Examining the useJPG-dropLiveBytes Instruments file in the same manner, the biggest live-bytes category is Malloc 2.66 MB (the previously mentioned physics stuff)… and nothing from the Malloc 4.00 MB and Malloc 1.00 MB categories. So I clicked the Category column header to sort by category, scrolled down, and saw Malloc 4.00 MB with # Living = 0, Malloc 1.00 MB with # Living = 0, and Malloc 256.00 KB with # Living = 0.

That was predictable.

But the chart still shows a peak and a nose-dive, and so I asked myself “why does this build up and then nose-dive, and when I started this on Wednesday, it was building up to 40 MB and just staying there?”

Now I know why the peak isn’t so high, but why does it fall?  Instruments to the rescue!

Using the dropLiveBytes file, still sorted by Category, I click the right-pointing arrow-in-circle beside Malloc 4.00 MB. This shows two rows, and both rows have a blank cell under the Live column. Looking at the same Malloc 4.00 MB category selection in the peakLiveBytes file, those cells would each contain a little black dot.

Then I clicked on the Extended Detail View button and looked at each of the two rows… in each case the 4 MB were being realloc’ed within glTexImage2D.

Now this is where random clicking and exploring Instruments really helped me.  Clicking the right-pointing arrow-in-circle beside the Object Address values in a row will show the history for that Object Address.  The columns of interest in my case are Event Type, Timestamp, Size, Responsible Caller.  I sort the list of two rows by Timestamp, ascending.

The Event Type order shows Malloc, then Free. Okay! So what else is in the second row along with Free? Well, obviously the same Address; that’s not important. The Size shows -4194304 (which is 4×1024×1024 = 4.00 MB), and the Responsible Caller shows glDrawElements.

So I examined the other 4 MB and 1 MB alloc objects in the same manner as above, and discovered that they were all being freed in glDrawElements.

So a quick jump over to khronos.org’s glDrawElements page and a search of the page for “free” turned up nothing useful.

And now I’m puzzled… why are these objects being realloc’ed in glTexImage2D and then freed, when they weren’t being freed before?

I have no idea. But I know I’ve got to get the PVR texture working, and it’s nice to know that the game should no longer get dumped by the iPhone OS (I hope).
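For reference, the PVR upload I still need to get working typically goes through glCompressedTexImage2D rather than glTexImage2D; a sketch, assuming a 1024×1024 4-bit-per-pixel PVRTC texture whose bytes are already sitting in pvrData:

// PVRTC stays compressed in memory: a 1024x1024 texture at 4 bits/pixel
// costs 1024 * 1024 * 4 / 8 = 512 KB instead of 3 MB uncompressed.
glCompressedTexImage2D(GL_TEXTURE_2D, 0,
                       GL_COMPRESSED_RGB_PVRTC_4BPPV1_IMG,
                       1024, 1024, 0,
                       1024 * 1024 * 4 / 8, pvrData);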

More adventures will follow.  Take care.


Bezier timing

I have just finished a bezier timing algorithm, and I’ll give a small description here.

I will be using the bezier values with a linear stepping value in a loop… in layman’s terms: a counter that counts up evenly, 0, 1, 2, 3 or 0, 2, 4, 6, etc.

There are methods to do this in the iPhone SDK, but I had already written half of the code, and it wasn’t much effort to complete. Plus, I looked up the CAMediaTimingFunction class, and it seemed too complicated for what I wanted (just an in-and-out shape).
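(For comparison, and purely as a sketch of the road not taken, the SDK version with my control points would be roughly this; it needs QuartzCore:)

// The same in-and-out curve, expressed with Core Animation's built-in
// timing class; only the two middle control points are given, because
// the endpoints are fixed at (0,0) and (1,1).
CAMediaTimingFunction *timing =
 [CAMediaTimingFunction functionWithControlPoints:0.5f :0.05f :0.5f :0.95f];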

So all I did is this: create a 100-element array.

.. and it gets a bit complicated, so here’s the code to build the array:

float t;
int i;
int transitionTimeStep;
CGPoint bpoints[1000]; // high-resolution samples of the full curve
float bypoints[100];   // the final 100-element look-up table of eased values

// Sample the cubic bezier curve at 1000 evenly spaced t values.
for (i = 0; i < 1000; i++) {
 t = i / 1000.0f;
 bpoints[i].x = (1-t)*(1-t)*(1-t)*0.0f + 3*(1-t)*(1-t)*t*0.5f  + 3*(1-t)*t*t*0.5f  + t*t*t*1.0f;
 bpoints[i].y = (1-t)*(1-t)*(1-t)*0.0f + 3*(1-t)*(1-t)*t*0.05f + 3*(1-t)*t*t*0.95f + t*t*t*1.0f;
}
// For each of the 100 time steps, find the index in bpoints where .x
// is closest to that step's fraction, and keep that sample's y value.
for (transitionTimeStep = 0; transitionTimeStep < 100; transitionTimeStep++) {
 float closest = 1.1f;
 int closestI = -1;
 for (int x = 0; x < 1000; x++) {
  float diff = fabsf(bpoints[x].x - (transitionTimeStep / 100.0f));
  if (diff < closest) {
   closest = diff;
   closestI = x;
  }
 }
 bypoints[transitionTimeStep] = bpoints[closestI].y;
}

The first loop creates a high-resolution set of points on a cubic bezier curve (see Wikipedia’s page for Bezier curves) whose 4 control points all lie between 0.0 and 1.0. The points (0, 1, 2, 3) have the following x,y coordinates:

  • 0,0
  • 0.5, 0.05
  • 0.5, 0.95
  • 1.0, 1.0

If a person multiplied these values by 100 and plotted them in a program like Adobe Illustrator, the curve would look vaguely like a capital letter S. Actually, it’s more like a forward slash / with little curves at the top and bottom.

So I collect the high-resolution points of the curve, and then loop through my time-step array. For every element in the time-step array, I examine every x-coordinate in the bpoints array, and whichever is closest to the current time-step fraction, I save that entry’s y-coordinate value.

Yes, this seems very complicated now, but when I was doing it, it seemed very easy.  I guess sometimes I just get on a roll and create complicated stuff.

Also, I know there are better ways to do this, but I only want a look-up table, and when I saw it in live operation before typing this post, it worked beautifully.
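For context, here’s a rough sketch of how the table drives an animation tick (the view and the start/end values are hypothetical placeholders):

// Called once per timer tick: look up the eased fraction for this step
// and use it to interpolate the view's horizontal position.
float fraction = bypoints[transitionTimeStep]; // eased value, 0.0 to 1.0
movingView.center = CGPointMake(startX + fraction * (endX - startX),
                                movingView.center.y);
transitionTimeStep++; // advance to the next of the 100 steps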

An important note! The resolution of the bpoints array is VERY important. It is not good to simply create an exact-resolution array and fill it with values. I did that at first, and found that too many indexes were being dropped; when the timing animation ran, there was a lot of choppy movement, as if the smooth slope of the curve had suddenly been turned into a staircase of jagged edges.

So, creating the higher-resolution array first and then scanning it for the closest x value was the logical solution. Perhaps I could have done it with a 200-element array, but I didn’t try it.

The process only takes a split-second, and it’s only done once at the beginning of the program.

That’s all, take care everyone.
