/var/tmp
   


Mon, 13 Nov 2023

PineTime

I wanted a smartwatch to see my heart rate in beats per minute, the number of steps I walk, and so forth. I also wanted one where the software and complementary apps were, or could be, free and open source. A smartwatch running WearOS (or alternatively AsteroidOS) would fit that bill, and would run a variant of the Android operating system, which I am familiar with. Another alternative is the PineTime smartwatch, which is geared toward a free and open source audience, and which was only $27 plus shipping. I could have bought a dev kit, which costs about the same amount, but at this point did not. Theoretically I could write my own OS for the smartwatch if I had the dev kit, but for now I am running the default InfiniTime OS on the watch.

I ordered the watch two weeks ago and it arrived just this Friday. It is running InfiniTime 1.11.0 from October 16, 2022. It does not yet run InfiniTime 1.13.0, released June 24 of this year with an improvement in heart rate processing. InfiniTime is GPLv3; I was reading through the 1.13.0 heart rate improvement, and that code is written in C++.

The watch came with a charger, so I charged it via the USB plug and dock. It was suggested that Gadgetbridge, Siglo and WatchMate would interface with the watch, so I put Gadgetbridge on my Android phone, and Siglo and WatchMate on my Linux.

The watch was sending battery levels out to my Android phone and Linux box via Bluetooth. It also sent step counts. It was not sending heart rate - however, when I went into the watch and asked for heart rate to be measured, it began measuring it.

I wanted to get an idea of my resting heart rate, my heart rate when moving around a little, and my heart rate when I go to the gym and get on the treadmill. The BPM floats around a bit, with some peaks when resting and valleys when moving quickly on the treadmill, so I evened those out in my head as probable error. On the whole, increased activity tends to increase average heart rate. On the treadmill, I now have a better idea of what incline and speed corresponds to 60% of maximum BPM, 65%, 70% and so on. This was my first time with the watch; I will probably keep pushing a little and seeing what some higher rates are, but I am not currently planning on doing a lot of HIIT training in the near future. Just enough to keep me healthy and maybe lose a few pounds.
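The zone targets come from simple arithmetic. A minimal sketch in Kotlin, assuming the common 220-minus-age rule of thumb for maximum heart rate (the age and percentages here are illustrative, not from the post):

```kotlin
import kotlin.math.roundToInt

// Rough zone targets using the common (220 - age) estimate of max BPM.
// This is the usual rule of thumb, not anything the watch reports.
fun zoneTargetBpm(age: Int, pctOfMax: Double): Int =
    ((220 - age) * pctOfMax).roundToInt()

fun main() {
    for (pct in listOf(0.60, 0.65, 0.70))
        println("${(pct * 100).roundToInt()}% of max for a 40-year-old: ${zoneTargetBpm(40, pct)} BPM")
}
```

With a known target BPM per zone, finding the treadmill incline and speed that lands you there is just a matter of watching the watch.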

After having F-Droid download Gadgetbridge and pairing the watch with it, I have been sending battery, step, and heart rate information to the app. I have also been sending it to Siglo and WatchMate on my Linux box. I have not used Flatpak a lot, but those two apps I got and ran via Flatpak. They are OK, although Gadgetbridge has been more of what I am looking for out of the box.

This gives me a better idea of how much cardiovascular exercise effort I should be making at the gym, and used along with an exercise regimen, it should keep me healthier, as I know better how far I can push my exercise. The watch and related apps are open source, so I have more control over all of this information, and can make changes to the watch and auxiliary applications if I want to.

[/smartwatch] permanent link

Thu, 16 Mar 2023

LLaMA

I have been using GPT-3, and GPT-4 just came out. Finding this interesting, I downloaded LLaMA, and now have it on my desktop. There is a Gradio-based, Automatic1111-like text generation web UI that can serve various large language models. Normal LLaMA does not work on my desktop's Nvidia RTX 3060 with 12G of VRAM, so I got 4-bit quantized models, which may be substandard, but what can I do? LLaMA comes in different parameter sizes: 7B, 13B, 30B and 65B. My Nvidia card can handle the 13B model, but conks out when trying to do the 30B parameter one. The model is 25 gigs large. Reminds me of how Marvin Minsky used to say that when AI is really figured out, it will be able to be run on a typical home PC.
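The 30B failure is predictable from back-of-the-envelope arithmetic: weights-only memory is parameters times bytes per parameter. A rough sketch (this ignores activations, context cache, and framework overhead, so real usage runs higher):

```kotlin
// Back-of-the-envelope VRAM needed just for the weights, in (decimal) GB.
// Ignores activations and framework overhead, so this is a lower bound.
fun weightGb(paramsBillions: Double, bitsPerParam: Int): Double =
    paramsBillions * bitsPerParam / 8.0

fun main() {
    println("13B at 4-bit: ~${weightGb(13.0, 4)} GB of weights")  // fits in 12 GB
    println("30B at 4-bit: ~${weightGb(30.0, 4)} GB of weights")  // well over 12 GB
}
```

So 13B at 4 bits is about 6.5 GB of weights, comfortably inside 12 GB of VRAM, while 30B needs roughly 15 GB before any overhead at all.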

So with the text generation web UI I am now talking with LLaMA, which is more interesting than talking to Eliza.

Common sense questions and answers
Question: How many legs does a dog have?
Factual answer: 4
Question: What does the German word "unbehagen" mean in English?
Factual answer: It means discomfort, unease.
Question: Who wrote a letter to the Corinthians circa 53 AD?
Factual answer: Paul of Tarsus.
Question: What is 22+33?
Factual answer: 55
Question: What is the astrological sign of someone born March 16?
Factual answer: Pisces.
Question: Who designed the movie poster for the movie "Vertigo"?
Factual answer: The designer was Bill Gold.

OK, finally got it. Saul Bass designed the movie poster for Vertigo (though Bill Gold did design the posters for other Hitchcock movies, like "The Wrong Man"). What does ChatGPT say? (I am pointed at GPT-3, as I am not currently paying for GPT-4.)

Who designed the movie poster for the movie "Vertigo"?
The iconic movie poster for the film "Vertigo" was designed by the renowned graphic designer Saul Bass. Bass was known for his innovative and memorable movie posters, as well as his work in title sequences and corporate logos. The poster for "Vertigo" features a stylized image of a woman's face and swirling, vertigo-inducing shapes, which perfectly captures the mood and themes of the film. The poster is considered a classic of graphic design and is still widely recognized and admired today.

Correct! ChatGPT wins my no-prize prize for that question. Of course, ChatGPT has 175B parameters to the 13B 4-bit LLaMA that I am running on my desktop.

[/ai] permanent link

Thu, 13 Oct 2022

Stable Diffusion

On August 31st, Stable Diffusion was released. It is a computer program where you tell it what image to create and it creates it. So you say "tree" and it will give you a picture of a tree. You say "oak tree" and it will give you a picture of an oak tree. You say "oil painting of an oak tree" and you will get an oil painting of an oak tree. You say "lithograph of an oak tree" and you will get a lithograph of an oak tree. You can say "lithograph of an oak tree and a deer" and get this: [image: lithograph of an oak tree and a deer]

You can ask it to give you some alternatives, in which case you also get another try at it, like: [image: alternative generations of the same prompt]

It is pretty simple to use - you tell it what image you want to see and it will attempt to give you that image.

It can do other things as well. If you like part of a picture it draws, but not another part, you can erase the part you don't like and tell it to try something else ("inpainting"). If you like a picture but want to see the surrounding envisioned scenery, you can ask for that as well ("outpainting"). If you give a text prompt and ask for some alternative pictures, and like one in particular, you can ask it to make more pictures that look like that one. You can also do all of these things - inpainting, outpainting, image-to-image - with already existing pictures it did not create.

Stable Diffusion was released for free, and the source of the software is available (as are the pre-trained weights of its model). I am running it on my desktop at home. It can be run on a server on the Internet as well (hopefully one that has one or more GPUs).

Programs like this have been developed elsewhere in the past two years - DALL-E by OpenAI, Imagen by Google, Midjourney. There has been some limited access to them as well, although sometimes only by invite. Stable Diffusion just dropped theirs so that everyone can use it, and I am sure it will speed the adoption of deep learning.

AI

To go a little back into the history of all of this - in 1956 John McCarthy, Marvin Minsky, Claude Shannon and Nathaniel Rochester held a workshop on "artificial intelligence" for a study - "The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it."

Around this time, two schools of thought arose on how to do artificial intelligence - symbolic AI and connectionist AI. One idea in symbolic AI is that there are facts, and with a set of facts, one can use logic to come to conclusions. Symbolic AI is fairly understandable - you can understand and explain how a computer reasoned about something.

The other school of thought in AI was connectionism. The idea in connectionism was to model the computer on the human brain - our brain is made of connected neurons, and connectionist AI created artificial neurons and wired them together. Connectionism has a focus on learning - data is sent through the neural network, and can be "remembered" later. Stable Diffusion is more in the connectionist school, which is the machine learning, deep learning school.

The connectionist school had some successes in the early days, but the 1969 book Perceptrons pointed out flaws in some things being done back then, which may have caused interest in connectionism to wane somewhat, and the success of some GOFAI type expert systems in the 1980s also favored that school over connectionism. However for the past ten years the connectionist deep learning school has seen a resurgence, and is in use at most of the big tech companies, and Stable Diffusion is part of this trend.

Stable Diffusion

Stable Diffusion uses a model to do what it does, and it cost about half a million dollars to train this model. The model is 4 gigs large on my computer, and it's amazing what I can generate on my home computer with that 4 gig model. It's truly mind-blowing, and gives some insight into how the brain works and what computers will be able to do in the future. Stable Diffusion generates a lot of great stuff, but has trouble generating some things (like a sign that says "WARNING", or human faces in some cases). As time goes on, it and other systems will just get better.

You can download Stable Diffusion to your computer, but even knowing what I'm doing, I had some difficulty getting it working. People are working to make it simpler. You do need a decent graphics card. There are also forks of Stable Diffusion which may be simpler to install, or which offer more features. If you think you can handle it, you can get Stable Diffusion here.

Probably easier is to just go to a website that takes text prompts and generates images, like Stable Diffusion web, Dezgo, or NightCafe.

[/ai] permanent link

Tue, 16 Nov 2021

Android - compiling and sideloading it onto an Android device

In 2008, Google came out with a phone with a free and open source operating system, which ran on the showcase G1 phone. I soon began to learn how to program applications for Android.

That the operating system was free and open source, and that I could theoretically compile the Android operating system on my computer and sideload it onto a phone, was a very exciting prospect for me. Nonetheless, due to lack of time, as well as various roadblocks (including ignorance about how to do it), I never actually sideloaded Android onto an Android device; I always used the Android OS installed by the manufacturer.

In July of last year I took another stab at it. Over three weekends I dove into the subject. I learned a little about bootloader unlocking, fastboot, and LineageOS. I bought a Samsung tablet, primarily to sideload onto it, as there seemed to be a bootable image for it and I could buy it at Best Buy. But then I learned that Samsung does not use Android fastboot like most Android devices; it uses something called Odin. After three weekends I became busy with other things and put the whole project off.

Work entered a slack period and I had some time off, so I decided to make another go at it. Android's source code is made available via the Android Open Source Project (AOSP), but they don't provide a lot of support to non-vendors, and compiling straight off of Google is daunting for someone without much experience with this. There are third-party projects, based mostly on Android, which help people load an Android-like system on their Android devices. They provide a little more hand-holding, and support for random people compiling Android source code and sideloading it onto devices. LineageOS is one of the most popular of these, so I decided to use LineageOS.

Last year I got stuck with an Android device I couldn't sideload to, so this time around, I wanted to make sure I got one that I could sideload onto. I wanted one officially supported by LineageOS, not one I would have to futz with to install with LineageOS. I also wanted one I could buy at Best Buy or the like. Having both of these conditions as true is difficult, as it takes a while for an Android device to be officially supported by LineageOS, yet Best Buy tends to sell newer Android devices. By the time a device is officially supported by LineageOS, Best Buy is often not selling it any more.

As Samsung did not use the standard fastboot, Samsung was off my list. Also, I wanted a device officially supported by LineageOS, but that was still being manufactured. With that in mind, there were very few tablets that could be used. The ones theoretically available looked a little difficult to get a hold of in the time I had. Most of what was available were phones. I would have preferred a tablet at this point, but phones were most of what was available, so a phone it would be.

Most of what was non-Samsung, had official LineageOS support, and could be picked up at a nearby Best Buy were Motorola phones, including a $149 Moto G Play at a Best Buy outlet. I drove there and was waiting when it opened on a Saturday. I asked a saleswoman for the Moto G Play. She could not find it in the rack. I then asked for two other Motorola discount phones. She could not find them either, nor in the back. The next best thing was a discounted Motorola Edge 5G phone at another Best Buy. I drove there, they had it, and I bought it. Box price was $700 but I got a couple of hundred off.

So I turned on the device, then turned on developer options. Then I allowed USB debugging and OEM unlocking for the phone.

I am using a desktop running Ubuntu 21.10. I add myself to the plugdev group on the desktop and add some udev rules. I get a response for "fastboot devices", and "fastboot getvar version" works when the phone is in recovery mode. Now I want to unlock the phone. I try on my Ubuntu 21.10 desktop to do "fastboot oem get_unlock_data" but it does not work. So I boot up a Macbook, follow the instructions to do it there, and I get the unlock data. I then send the data to Motorola on the web and they send a key back when I agree to void the warranty. I enter the key and now the phone is unlocked.

Speed is the key here, and unlocking works on the Macbook but not the desktop, so I just proceeded with it on the Macbook instead of investigating. I probably won't unlock this device again, but if I unlock another one I can look into it more. I now turn off the Macbook and boot up my laptop running Ubuntu 21.04 and work off of it. I may have been able to have gotten the oem data and unlocked the phone from my System76 Ubuntu 21.04 laptop, but I did not try.

So on my System76 Ubuntu 21.04 laptop I download the LineageOS Edge (racer) recovery image, copy partitions zip, and the latest LineageOS nightly build for racer (Moto Edge). I fastboot flash the recovery image onto the Edge. I turn the phone on and off and - I see the new recovery screen. I then sideload the copy partitions zip. I then sideload the LineageOS nightly build. I go back to reboot system on the recovery screen. LineageOS boots! I set it up. Cool. Wifi works. Camera works. I turned USB debugging on in settings and shell in on ADB. It works. Cool!

OK next step - I want to compile LineageOS myself. I install the suggested packages on my Ubuntu 21.10 (similar to AOSP suggested packages). I follow the instructions, then do a "repo sync" which says it will take a while - and it takes a few hours.

So the repo synced. Now I specify I am building for the Moto Edge (racer). In terms of doing this simply and step-by-step, I probably missed an overall step here, and that would be to compile LineageOS for a standard AVD/emulator, without worries of firmware, unlocking and so forth. I could have seen if I could compile the Android OS for an Android emulator, and then dealt with specific hardware. I could have done that before buying the Motorola Edge.

So my build fails. Ubuntu 21.10 was released fairly recently, and it uses glibc 2.34, which this racer/Edge branch of LineageOS (and AOSP?) may have trouble with.

So I start the build process over on my laptop running Ubuntu 21.04. The Edge (racer) has payload-based OTAs in its A/B partitioning system, and I extract them. So I keep going on my laptop and - LineageOS does compile. My build of the recovery image seems to have problems, so I flash the stock one I downloaded. But the LineageOS image I built on my laptop does sideload successfully. I can even go to the build number in my settings and see my build. So I built my own Android OS and sideloaded it onto the phone. Yay!

So that was enough progress for the down time that I had. Next steps - turning the phone on with T-Mobile (Sprint) if they do that. That would cost money though. I extracted proprietary blobs from the phone, so putting Google services back on might be something I do, if it is allowed. Figuring out if my Ubuntu laptop or desktop can unlock Motorola phones (did not try with the laptop). Modifying things in the build source code and reloading the operating system. Seeing if this can build on the 21.10 desktop. If not directly, then building with a 21.04 KVM on the 21.10 desktop initially. Building not just the OS image but the recovery image.

Also, a look at the Linux drivers and such for the parts of the phone (camera, screen, microphone, speaker etc.) is interesting, but as I am still so unfamiliar with all of this, that might be overreaching. Doing it with my desktop might be easier, but things like these Android devices are where the bleeding edge is.

Any how, I finally got to build and sideload the Android operating system with the LineageOS modifications. The LineageOS instructions say "It's pretty satisfying to boot into a fresh operating system you baked at home :)" and that's kind of how I feel.

[/android] permanent link

Tue, 30 Apr 2019

Database Android app

So 7, almost 8 years ago I released my second Android app, which was able to load Microsoft Access databases on Android. The first release was four days from idea to release. I improved it over time, and then stopped working on it once people seemed happy enough. Eventually it got a real competitor or two, and I left the field to them, unpublishing the, by then, out of date app.

Well, as I noted two days ago, I got past the second major hurdle in my spreadsheet app, which I have been working on, on and off, for over seven years. But there is a lot of work still to be done, and I may have bitten off more than I can chew, especially since I am doing this solo and am busy with other things as well. I would need to put in more features before I thought of releasing my spreadsheet app on Google Play.

The spreadsheet framework I did is one that works well not just for a spreadsheet app, but for an Android database app too. My spreadsheet framework fixes many problems people had complained about with the old database app I had done, like ability to scroll.

There are some good Android spreadsheet apps around - although not many, despite the popularity of spreadsheets, probably due to the complexity of making one - and they have a number of good features which my spreadsheet app does not.

However, for my new updated database app, which now uses my spreadsheet framework, even at this early stage, there are only about two real competitors. My updated database app is already at a stage where it is competitive with them. I have features their apps do not have, and they have some features my app does not have. I will be working to improve the app.

The database app makes a good test of the spreadsheet framework. That is the main reason for the database app release - to put the spreadsheet framework out there and have real users bang on it, see if any problems develop that need to be fixed, see if they make suggestions and so forth. Best to implement fixes and features in response to real world usage.

The app can be found on Google Play.

[/android] permanent link

Sun, 28 Apr 2019

Spreadsheet Android app

By June 17, 2011, I had published two Android apps - an app to verify US driver's licenses (the idea was suggested to me), and an app that could open Microsoft Access databases on Android. The database app had successfully leveraged an existing FOSS Java library, so when I was casting about for my next app, I saw that there was a library (POI) that could handle Microsoft spreadsheets. With what I now realize was enormous hubris, I decided that a spreadsheet would be my next Android app.

I wanted to make sure the library worked, so I did a simple app where I load an XLS spreadsheet and display some information in the console. It worked! Great! I assume XLSX will work too, so I spend the next two weeks completely focused on building out the Android UI and so forth.

After two weeks of doing UI work, I decide to try to load some XLSX files. Uh-oh. Including all the libraries needed for that makes the Android Dalvik Executable file exceed 2^16 (65,536) methods - and Android dex files only had a 16-bit identifier for methods at the time. I do not know this; I find out the hard way (and the error message at the time was pretty obscure).
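The ceiling comes straight from the dex format: method references are indexed by a 16-bit id, so one classes.dex can reference at most 2^16 methods.

```kotlin
// The dex format's method-reference index is 16 bits wide,
// hence the 65,536-method ceiling described above.
val maxDexMethodRefs = 1 shl 16

fun main() = println("dex method reference limit: $maxDexMethodRefs")
```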

I spend some time trying to get around this and read about it, and finally throw in the towel, post the code on Github, and move on. Which was not a bad idea - I was new to Android, Android had not developed as a platform, and I released other apps which made me tens of thousands of dollars in profit.

In November 2014 Android replaces Dalvik with ART, and suddenly the 2^16 method limit becomes more manageable. One thing Android programmers had been pointing out was that Google was suggesting use of the support library, and THAT had grown to over 10,000 methods - so Google's own suggested libraries were slowly consuming a big percentage of the limit. Any how, they fixed it before it became a problem.

I was too busy for a few months to look into it, but then I jumped back into the app again. Loading XLSX now worked, cool! I spent a few months working on the spreadsheet. I was playing around with content providers at the time, so I modified it to allow loading of Android system SMS messages, contacts, call logs, calendars and such. I also tinkered with hooking it up to the Jackcess library, as my old database app UI was becoming obsolete.

But then I ran into another barrier - layout. I was laying out the spreadsheet from cell A1 and calculating width and height from there. Which means jumping to say cell AAA9999 would mean a lot of calculation before the jump.
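For reference, converting a jump target like AAA9999 into indices is the cheap part - column labels are bijective base-26 numbers; the expensive part is the cumulative width/height calculation. A sketch of the label-to-index conversion (my own illustration, not the app's code):

```kotlin
// Convert a spreadsheet column label ("A", "Z", "AA", "AAA", ...) to a
// 0-based column index. Labels are bijective base-26: A=1 .. Z=26.
fun columnIndex(label: String): Int {
    var n = 0
    for (c in label.uppercase()) {
        require(c in 'A'..'Z') { "bad column label: $label" }
        n = n * 26 + (c - 'A' + 1)
    }
    return n - 1
}
```

So "AAA" is column 702 (0-based); knowing the index is instant, but knowing its pixel offset still means summing the widths of the 702 columns before it.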

So I put the app aside for another three years and change. Then on November 24, 2018 I take a look at it again. I start from scratch on just the layout manager part. I rewrite the app in Kotlin, pulling in what is needed from the old Java app. I work until December 14th and get stuck again on the layout manager that I have already spent so much time on. It does not now scroll to the top or left smoothly. If I scroll fast, cells do not get filled in. I am also skipping rows and columns when scrolling.

I have some time on March 19th. I know the smooth scrolling to the top or left is probably easy to fix. So I do fix it, great. That leaves the skipped cell layouts - skipping ahead whole rows and columns, going from columns P and Q to, say, T and U, with no R or S column. I figure those are difficult, but I have the easy one done. So I make an effort to fix the skipped rows and columns. It takes an effort - but I do it! Now just one more left - the cells not filled in. I decide to clean up the code some now, though, as it is getting unwieldy. I combine functions, I name things clearly, I create enum names for clarity.

I debug it and see what is happening. This takes a while as well. Then - it works! It had been an off-by-one error: it only happened when the number of pixels I scrolled was exactly how many pixels of laid-out cells were off-screen. Cleaning up the code helped make it easier to find the problem. Great! Wow, I thought I was just taking another stab at it, but between March 19th and April 3rd, I fixed all three errors, including the two big ones.
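That class of boundary bug is easy to reproduce in miniature. A contrived sketch (my illustration, not the actual layout-manager code): a strict comparison that fails only when the scroll distance exactly equals the laid-out off-screen extent:

```kotlin
// After scrolling by scrollPx, do the cells already laid out off-screen
// still cover the viewport? A strict `<` fails only in the
// exact-equality case - the kind of one-off boundary described above.
fun coveredBuggy(scrollPx: Int, offscreenPx: Int) = scrollPx < offscreenPx
fun coveredFixed(scrollPx: Int, offscreenPx: Int) = scrollPx <= offscreenPx
```

Bugs like this pass almost every test by accident, which is why they can take a while to pin down.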

So from April 3rd to April 23rd I pull in the features from the 2015 code, translating them into Kotlin and sometimes improving them. Then I push the code to Github. From April 23rd to April 27th I put in code to handle jumps, some ODS spreadsheets, the ability to search, multiple sheets over multiple tabs, the ability to handle incoming spreadsheet intents, and then handling XLS/XLSX row heights and column widths, as well as some other things. Some of this code was pulled (and sometimes significantly rewritten) from the 2015 code.

Aside from the old XLSX and Dex problem, the layout manager had been the other hurdle. Which seems solved now. Lots of the little things needed can be put in.

Right now I am concentrating on proper spreadsheet viewing; editing and saving I am not dealing with yet. The viewing features are the priority for the project right now. I thought of doing an SMS backup app, and I already have the code written to do it, but I think Google is getting strict about that stuff now. I have a git branch on my workstation, not pushed up, that has basic save functionality, but I want it done right, and with so many viewing features to do beforehand, those take priority. Editing and saving are way on the back burner.

I might check out the Jackcess library again and consider putting out a modern Android app that can load Microsoft Access databases in this framework. That might make sense as the first release of this current framework of code. Right now I am busy with my day job though, so that might be a bit.

Other than editing and saving, pull requests and patches are welcome. Please read the README with regards to that.

[/android] permanent link

Thu, 02 Aug 2018

Developing an Android app - Wallpapers app - part 6

This is my 6th post in a series about the Wallpapers Android app I developed that is on Google Play. The earlier blog posts describe how I planned it, started developing it, released it, and updated it, so if you want a view of the entire development process of an app, you might want to start reading from the beginning.

This blog post is about the last version I released, where I translate it from Java to Kotlin, and from MVP architecture to MVVM architecture with various Jetpack components. The app is a free one where people can browse through possible wallpapers for their phone background and/or lock screen, and then download and set them.

In my last post I talk about how Google favored Model-View-Presenter architecture in early 2017, and then, in the midst of my app writing, changed the roadmap at I/O 2017 to Kotlin and the (then-beta) new Android architecture components. I ignored this at the time and kept plowing ahead with Java and MVP for this project, though I did pick up the Android architecture components (Jetpack) and Kotlin on other projects. I was busy with these and other things, and my app's stability and functionality was pretty solid any how. It was a little over nine months before I dived back into the app code, rewriting it in Kotlin (and Jetpack).

The Google Android sample apps that use Kotlin and Jetpack components - the sunflower app, the Github app, and particularly Google's Reddit network paging app - were apps I studied as I learned Kotlin and Jetpack. I actually rewrote the network paging app from the ground up, removing anything extraneous, until it was under 1000 lines of Kotlin. I also saw how the various classes worked together as I built it.

I used this stripped-down network paging app as the framework of the new Wallpapers app, sprinkling in components from the existing Java-language, MVP-architected Wallpapers code. It was in June of this year that I decided to rewrite the Wallpapers app in Kotlin, using modern Jetpack components like LiveData, ViewModel, Paging and so forth. I want to make the app better and keep it up to date, and I also want to hone my somewhat new Kotlin and Jetpack skills on a slightly more complicated challenge.

June 9, 2018

In Android Studio, create new Android Kotlin app on the basis of my stripped down version of Google's Reddit networking paging app. Create standard Android Kotlin skeleton. Then add ServiceLocator (no Dagger in this app yet), ViewModel, Repository. Then add Glide to load the images. Then add the RecyclerView and ViewHolders. Some of this stuff I had in the old app, which makes things easier.

June 10, 2018

Add a (Room) Dao. Add a (Retrofit) web API. Add more logic to the repository. Load the first url. It loads! Now I need to parse it. Now translate more network and database logic from the old Java MVP app to the new Kotlin MVVM app. I make good use of Android Studio's Java to Kotlin code converter, which does not always work (and even when it does, is not always exact). Work on the ViewHolder. Now load some images. They load! Oops, some of the filenames have UTF-8 and URL encoding quirks. Translate the old code that deals with that to Kotlin.

June 11, 2018

Now redo the frame. Cool, load three images in a row, just like the old app. Add the app icon. Since Android 7.1 round icons have come to the fore, I might have to think about redesigning my icon. OK, a grid of thumbnails is loading, just like the old app. On the old app I could select a thumbnail to see it in more detail, as well as find out more about it, as well as allow me to download it and, if I want to, set it as my wallpaper. So I start working on that detail page in this app. I start with loading the thumbnail from the grid in the new page with Glide.

June 19, 2018

OK now I have a detail page with an initial thumbnailed picture up top. I put in Retrofit calls to get the detail information, and some of the logic to send that state information to the UI.

June 20, 2018

I put in more logic to pull from Retrofit to the UI. I introduce data binding into Gradle, the Activity and the XML. Ah, my Dao SQL calls can be improved. Android really is getting full stack - I can use my server SQL skills locally in the Android app.

June 22, 2018

OK, now dealing with asynchronous threads, LiveData etc. The database takes time to get the information, but I have a LiveData object in the repository that posts the state information to the ViewModel, so that the different parts of the app can know when the data is ready.
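The pattern can be sketched in plain Kotlin with no Android classes - a minimal LiveData-like holder, not Jetpack's implementation: the repository posts state, and any observer registered by the ViewModel or UI is notified when the data is ready.

```kotlin
// Minimal observable state holder: the repository calls post(), observers
// (ViewModel/UI) register with observe() and receive current + future values.
// Illustrative only - real LiveData also handles lifecycle and threading.
class StateHolder<T> {
    private val observers = mutableListOf<(T) -> Unit>()
    var value: T? = null
        private set

    fun observe(observer: (T) -> Unit) {
        observers += observer
        value?.let(observer)   // replay the latest state to late observers
    }

    fun post(newValue: T) {    // the repository posts here when data is ready
        value = newValue
        observers.forEach { it(newValue) }
    }
}

fun main() {
    val state = StateHolder<String>()
    state.observe { println("observed: $it") }
    state.post("loading")
    state.post("ready")
}
```

The real LiveData adds lifecycle awareness (observers stop receiving updates when their Activity/Fragment is not active) and main-thread dispatch, which this sketch omits.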

June 24, 2018

I make the links on the detail page clickable (to web pages). The Wallpaper class which the detail page uses has a number of String variables, and the transformations are a little kludgey, but it works. One of these variables is the wallpaper's categories, which were missing until now, so I start putting those in.

June 25, 2018

OK - so now we're adding the more complex attributes of the wallpaper. We did the easy ones early, categories was yesterday, now we're adding the wallpaper's licenses. OK cool. OK, now we move on to downloading the wallpapers. Works! OK, now we set the wallpaper as a background. Works (tentatively)!

June 27, 2018

OK, so the old app opens with three tabs, one selected at a time. So put that in. We have a grid Activity, so make it a Fragment like the old app's. OK. The first two tabs are similar; the third tab is a list of our category types for wallpapers (nature, flowers, cats, space, food etc.). So put that list in.

June 28, 2018

The old app had progressive thumbnail loading - when we go to the detail page we first load the small grid thumbnail, then we load a more detailed thumbnail over it. So we put that in. It seems simpler here; Glide has probably improved.

July 1, 2018

We have been loading the recent tab; now add a real popular tab which loads the popular wallpapers. I allude to how I choose which wallpapers are popular in an earlier blog post, although the method has been refined since. The method is to look through the logs and see which wallpapers have been downloaded by a unique IP, and then score them, using the number of days since each download as input to an exponential-decay score. It works pretty well - I think it actually puts me over all the other wallpaper apps in that one regard. Perhaps I'll go into more detail in another blog post.
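
The scoring scheme can be sketched in a few lines of Python. This is only a sketch of the idea - the half-life constant and the function name are illustrative assumptions, not the actual script:

```python
import math

def popularity_score(download_ages_days, half_life_days=30.0):
    """Score a wallpaper from the ages of its unique-IP downloads.

    Each download contributes exp(-age * ln2 / half_life): a download
    today counts 1.0, one a half-life ago counts 0.5, and very old
    downloads decay toward zero.  (half_life_days is a made-up value.)
    """
    k = math.log(2) / half_life_days
    return sum(math.exp(-k * age) for age in download_ages_days)

# A wallpaper with two fresh downloads outranks one with five stale ones.
fresh = popularity_score([0, 1])
stale = popularity_score([300, 310, 320, 330, 340])
print(fresh > stale)  # → True
```

Scores computed this way stay comparable across wallpapers of any age, which is what lets one ranked "popular" list come out of the logs.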

July 2, 2018

OK so now when you click on one of the list of categories, a category page actually loads. The Retrofit logic, Room, ViewModel and UI stuff is there now. Also start sending version and language information to the (test) server, as well as the Instance ID (which we do not initially load in the main thread!)

July 3, 2018

So now the paging library deals with loads. It is a little different from my old hand-tuned code. My old code did a small initial load, so that even on slow connections something would appear on the screen, and subsequent JSON loads were larger. Also, I did a lot of pre-loading so that scrolling went smoother. One big difference is that in the old app the Android UI knew how large the grid would be, and now it does not know until the last page is loaded. So that is a factor in slowing scrolls down.

The old Java code used Java TreeMap to sort the category list for different languages. I send a "java.util.TreeMap()" to a Kotlin map and it works in the new app. I'm not sure if there is a more Kotliny way to deal with this, but there's another month's worth of work ahead for this, so that question goes on the would-be-nice-to-know list.

I put the dev and production URLs for the REST API in a saner, central place. I improve the network error message (in the old app it was a dialog window). I upgrade various libraries.

July 6

Upgrade Kotlin 1.2.50 to 1.2.51.

July 7 - 27

OK, I had a list of to-dos, and most of them (except the simple ones to leave to the end, like bumping the app version number, turning on the production URL, putting in ads etc.) are done. Two tougher ones remain - making sure I poll the web API periodically, and keeping my place on the grid when I click into a detail page and back out to the grid (something other example apps, like the Google Sunflower app, do not do).

To get a robust scheduled web poll, I go down some blind alleys, like the way Google's GitHub app does it. It is not robust enough for me. I also run into all kinds of headaches, like Room deletions not working for me. They work once I add onDelete CASCADE ForeignKey parameters to various Entities though.

I am storing the last web poll time in Room. Not sure if it is 100% necessary, and it may be overdoing things, but I'd rather know it was being polled than chance it not being polled. Any how, all seems OK, but I do not fully understand this and should revisit it later.

July 28

The web poll code is done. Yay. Now I get to work on keeping my location in the grid.

July 29-31

Keep location in grid after clicking into detail. I do it essentially the old way I did it before, except that in the old way I did not scroll until the presenter notified the UI that the grid was populated, and now I have an observer in the UI which notifies me when the grid is populated. So it works, yay.

August 1, 2018

OK I started this about two months ago, spent about three weeks (when I had time) on web polling, and then three days on scrollToPosition in the new app. So I'm a little antsy to publish. I QA and QA things and they look OK. I make a release and send to another app, signing both the jar and the whole APK, as I want it to work with old and new devices. Things look OK so I send it up to internal testing on Google Play.

The results come back. A crash. I look. Some of the code I had automatically translated from Java to Kotlin did not come out exactly right. I had done a non-null assertion (!!) where I should not have. I had a null check later in the old code, so it was working before. Any how, I redo the code, QA (especially what the changed code deals with), and upload again to Google Play internal testing. Google is still running it through its testing devices.

So we'll see how this goes. The app was fairly solid before in terms of stability. One problem users had sometimes in the old app was with the Environment.getExternalStoragePublicDirectory() call. One problem I had is I knew that call was failing but did not know why. I rolled my own network crash report system and discovered it was usually because the mkdirs() call on the object returned from that call was failing. Which I still have to figure out. Other than that, things were fairly solid.

[/android] permanent link

Fri, 27 Jul 2018

Python Scikit-learn and MeanShift, for Android location app

I am writing a MoLo app (Mobile/Local) which might even become a MoLoSo app at some point (Mobile/Local/Social).

Any how, the way it works right now is it runs in the background, and if I am moving around, it sends my latitude and longitude off to my server. So I have a lot of Instance IDs and IP addresses, timestamps and latitude and longitudes on my server.

How to deal with taking those latitudes and longitudes and clustering them? Well, I am sending the information to the database via a Python REST API script, so I start with that. I change the MariaDB/MySQL calls from insert to select, and pull out the latitudes and longitudes.

The data I have is all the latitude/longitude points I have been at over the past four months (although the app is not fully robust, and seems to get killed once in a while since an Oreo update, so not every latitude and longitude is covered). I don't know how many clusters I want, so I have the Python ML package scikit-learn do a MeanShift.
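
The nice thing about MeanShift is that you don't tell it how many clusters you want. A toy version of the call, with made-up coordinates standing in for my data (the bandwidth here is an illustrative guess - by default scikit-learn will estimate one):

```python
import numpy as np
from sklearn.cluster import MeanShift

# Made-up stand-in data: a tight "home" blob of points in Queens
# and a smaller "trip" blob near Wilmington, Delaware.
rng = np.random.default_rng(0)
home = rng.normal([40.76, -73.92], 0.01, size=(30, 2))
trip = rng.normal([39.74, -75.55], 0.01, size=(10, 2))
points = np.vstack([home, trip])

# MeanShift discovers the cluster count itself, given a bandwidth.
ms = MeanShift(bandwidth=0.5).fit(points)
print(len(ms.cluster_centers_))   # → 2
labels = ms.predict(points)       # cluster id for each location row
```

With real data, `ms.cluster_centers_` gives the lat/lon centers I describe below, and running `predict` over the points in time order gives the sequential runs of cluster memberships.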

One thing I should point out is that in the current regular fast update interval for the app, I only send a location update if the location has changed beyond a limit (so if I am walking around a building, it will not be sending constant updates, but if I am driving in a car it will).
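
That thresholding is simple to express: compute the distance from the last reported point, and only emit an update when it exceeds the limit. A sketch using the haversine formula (the 50-meter threshold is an assumption for illustration, not the app's actual limit):

```python
import math

EARTH_RADIUS_M = 6371000.0

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in meters."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

def should_send(last, current, threshold_m=50.0):
    """Only report a location that has moved beyond the threshold."""
    if last is None:
        return True
    return haversine_m(*last, *current) > threshold_m

# Pacing around a building (~14 m) is suppressed; a drive is not.
print(should_send((40.7600, -73.9200), (40.7601, -73.9201)))  # → False
print(should_send((40.7600, -73.9200), (40.7700, -73.9200)))  # → True
```

This is also why stationary periods produce few points while drives produce many - a property that matters for interpreting the cluster sizes below.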

Scikit-learn's MeanShift clusters the locations into four clusters. Running sequentially through the locations and predicting each one's cluster, the first run is 2172 locations in cluster one. Then 403 locations in cluster two. Then 925 locations in cluster three. Then another 410 locations in cluster two. Then 4490 locations in cluster one again. Then 403 locations in cluster four. Then 2541 locations in cluster one.

The center of the first cluster is about half a mile west and a quarter mile south of where I live. So I guess I spend more time in Manhattan, Brooklyn and western Queens than in Bayside and Nassau.

The center of the second cluster was near Wilmington, Delaware. The center of the third cluster was Burtonsville, Maryland.

It's due to the aforementioned properties of the app (I only send a location update if the location has changed beyond a small distance limit) that I had 2172 locations from March 12th to April 2nd in one clustered area, and then on April 2nd, 1738 locations in two different clustered areas. On April 2nd I drove to and from my aunt's funeral in Maryland. That trip created two clusters - one in Wilmington for my drive there and back, and one in Maryland where I drove to the church, the graveyard and to the lunch afterward.

So then I have another 4490 location updates in cluster one, until those 403 updates in cluster four. The center of that cluster is Milford, Connecticut, and it corresponds to a trip I made to my other aunt's house near New Haven, Connecticut from May 25th to May 26th. Then it is another 2541 updates back in cluster one.

So...I could exclude by location, but I could also exclude by date, which is easier. So I exclude those three days and do MeanShift clustering again. Now I get six clusters.

Cluster one is centered about five blocks east, and slightly north of where I live. It has the bulk of entries. Cluster two is centered in east Midtown. Cluster three is centered near both the Woodside LIRR station and the 7 train junction at 74th Street. Cluster four is centered in Mineola, Long Island. Cluster five is centered south of Valley Stream, with 200 updates in three chunks. Cluster six is in Roosevelt, Long Island and only has one update.

MeanShift is good, but I may try other cluster types as well.

[/ai] permanent link

Sat, 23 Dec 2017

Google VR, Daydream, Unity

So on February 13th I bought a Pixel phone as well as a Daydream VR headset. I set up the Android Studio "Treasure Hunt" sample, and modified it slightly to change the controller behavior and other things. This was all rather new, and there was not a lot out for it. I went into the Daydream VR store and downloaded some free games (and one paid one) and saw almost all of them were made with Unity. I saw a lot was involved in making VR from scratch in Android Studio, and that it was also limited to just the Daydream VR headset, so I put it aside and worked on other things.

For most of this year I have only had an Ubuntu Linux laptop and desktop. I read several months ago that Unity had a Linux beta, but then read that it did not export scenes to Google Daydream VR. However, I have had a Mac notebook on loan since October, and I knew Mac could export to Google Daydream VR.

So on December 10th, I downloaded Unity for Linux and began looking at it. Unity has a tutorial called "roll a ball" where you make a game that rolls a ball around, picking up spinning cubes. While making the game, you learn about the various aspects of Unity, writing short C# scripts and so forth. I finished that, and then had a nice little game. One nice part is it was exportable to a number of OSes - Android, iOS, Linux, Mac etc. I played the game on my Linux laptop, and then played it on my Android.

Then I looked at the Google Daydream VR sample on Unity Linux. It downloaded, and I could edit the scene and preview it on Linux. Then I tried to export it to Android VR. No go. Well, Unity had warned me before I downloaded it, but I gave it a try.

So I pulled out the MacBook I have on loan, downloaded Unity, downloaded the Google VR scene, and sent it to my Pixel. I put my Daydream VR headset on and, bam, I am in the scene I just compiled.

I make some minor modifications, and they pop up in the scene. I program Unity, put the headset on, am in the world I just made, want to make a change, pop the headset off, am back at the keyboard, put the headset on again and am in the changed scene. Very cool.

At the local Android Developer Meetups is a fellow named Dario who works for HTC. He has been working with VR a lot. He thinks the interesting thing will be the building you can do within VR. One example I have seen of this is Medium, where you are molding a form together with your controller. In school I learned that one definition of an embedded system was a system that could not program itself. If you could change the world you were in from within using Daydream, Oculus, Vive etc., the scene would not be embedded.

In the XScreenSaver source code is a DXF file to build a robot, so I popped it into Unity. It was way too big for the base Unity Daydream sample app scene. So I scaled it down a bit. Better.

But it was all one color. So I looked at the winduprobot.c to see what was being sent to glColor3f for various robot parts. I dropped them in as materials and now the robot was colored properly.

But the DXF was only half a robot. So I looked in winduprobot.c again and mirrored or otherwise convoluted various parts so that the body inside, body outside, leg, and arm-part would be mirrored on both sides.



So that is where I am now with it.


It is pretty cool to be able to drop 3D models into the world, write little C# (or JavaScript) programs for the world and so forth, have it all pop up, and to be in that world.

In the future I might look into Godot Engine which is getting some AR/VR support, or look back on the Android Studio VR modules, or into other things. Unity is a good, easy base to survey these things from though.

[/vr] permanent link

Sat, 19 Aug 2017

Developing an Android app - Wallpapers app - part 5

This is my 5th post in a series about the Wallpapers Android app I developed that is on Google Play. The first blog post describes how I started developing it, this is about the last few versions I released.

In early 2017, Google favored a certain type of Model-View-Presenter architecture for Android apps. Google promoted this as the way Android apps should be written.

As my last blog post notes, on April 23rd, 2017, I began refactoring the entire app to fit more into this architecture that Google was promoting. In the three weeks after April 23rd, I did a large amount of work rewriting the app in this manner.

Then on May 17th, Google I/O happened, and they announced a whole new way of architecting apps that somewhat junked my last three weeks of heavy work. C'est la vie! Welcome to Android development. Also, as is typical for Google, it was announced as a beta, so its production readiness was questionable. After talking to people and reading thoughts from Android experts, I decided to press on with refactoring to this now-deprecated architecture, with thoughts of perhaps refactoring again to the new architecture model at some point in the future.

So my previous blog post focuses on the release of this majorly refactored code on June 13, 2017. This blog post focuses on what came after that release: first, fixing errors I saw pop up once the code was released, and then other improvements I have made since.

June 15, 2017

I update the Google services JSON for Firebase (and AdMob), and upgrade Firebase to v11.0.1. On the AdMob backend, ads for the main page are distinguished from ads on the category pages, so I make the distinction explicit in the ads in the app as well. I display a Toast when a download completes successfully. I also deal with an object coming in as null in places where I am not 100% sure why it would ever be null; I have to check into that more.

June 16, 2017

Release the app, version 2.7.3. I do partial rollouts of the new code, 1000 users at a time.

June 19, 2017

When the Android client connects to the JSON API, each client sends a unique InstanceID. The main purpose of doing this is to track down errors, if people are having problems with the app, we want to have as much information as possible in order to try to fix the problem. However, I am seeing ANR (Application Not Responding) errors, as some Android devices are freezing up while calling the Google Play Services code to get an InstanceID. So I put the (not very essential) call in an AsyncTask so that that freeze-up does not happen.

There was also a problem of network requests going out, the view/presenter being reset and sending out a duplicate network request, then the old request coming back, and then the new duplicate one. The simplest thing for me to do is to discard the old ACK, so that is what I do. I do a release and start rolling out this new version.

June 21, 2017

More nullness to deal with. Retrofit Response bodies are coming back null. Have not been able to reproduce this in QA yet. I rewrite the code to display the "network failed" dialog when this happens, and have to do more QA to see how to reproduce this problem which is happening in the field.

In some cases it was not scrolling all the way to the end of the wallpaper grid, so I modified the code so that it would.

In the previous blog post, I mention that one thing I punted on with the big June 13th release was ranged notifies. When a JSON would return new wallpapers, I notified and refreshed the entire adapter, which made the images reload (and the screen blink) every time a new JSON came back with new wallpapers. As June 13th approached I was getting antsy about how long the refactor and QA had taken, and decided this annoying blinking was something I could live with and deal with later. With the release out, I take a look at it now. I see that it is not that difficult to send a notifyItemRangeChanged to the adapter, so I do that. The reloading and blinking is now gone. Yay.

I also make sure some assertions are true before loading more JSONs for the recent and popular wallpaper grids.

June 22-24, 2017

I am running into one of those hairy Android problems. There is an older and a newer method of sending images off in an Intent to be set as wallpapers. The problem is it is not exactly clear when the old method should be used and when the new one should. For the app being sent to, which method to use can depend not only on the app version, but on the Android version and other factors. Also, the new method has problems of its own - the old method handles things like JPGs whose filenames end with a capitalized JPG, but the new method does not (without some rearranging any how). I don't really fix anything, but Google+ is now excluded from setting wallpapers as it only works with the new method, which I have yet to implement (outside of test functions).

June 26, 2017

Some of my competitors have a nice feature graphic for their Google Play store listing. Mine is not so great. So I put together a nice 1024x500 feature graphic. What I do is find 14 nice wallpapers which go together well. Then I make 146x250 thumbnails of them, a ratio close to that of a typical Android phone - with the exception of the four wallpapers on the left and right edges, which are 147x250.

I had never done a store listing experiment before, so I do one for the new graphic. I start with a global experiment, but the global experiment is constricted. So I do it by language - both English and French. I run the experiment for 11 days. There are a few hundred downloads but no big statistical difference is seen. So I end the experiments and serve everyone the new graphic - it doesn't seem to have harmed anything anyway. Subsequent looks at the statistics yield very little as well; it had no major effect on download conversion in either direction.

June 30, 2017

Deal with Retrofit Response being null for detail responses, just as I had for Retrofit Response being null for grid responses on June 21st. As a preventative measure, I have the category page deal with null Retrofit responses as well, although I have not yet seen them in the wild.

July 7, 2017

Even though the images on the category page have been shrunk to 200px, they still cause OutOfMemory errors on some devices. So I push handling of the image loading to Glide.

July 8, 2017

Upgrade Firebase etc. to 11.0.2

July 9, 2017

People in the wild are crashing on a null view object in DetailFragment. I put in a kludge to deal with this, but the real problem is that object should not even exist in the first place, and DetailFragment has become too spaghetti code like as it has continually accreted code to try to deal with all the various tasks it has to do (permissions, load two thumbnails, load JSON and description, download and set wallpapers). A few weeks later I will rewrite this class and make it cleaner.

July 10, 2017

Production release of new code.

July 13, 2017

Usually I am testing this code on wifi. When I test it on a cell connection, my connection is usually good. So I don't have a lot of QA from less robust areas.

In New York City there is a local Android developer meetup. I go to it and show someone my app. The cell connection is not robust though, and embarrassingly, my app has problems as I show the app to someone. The problems go away when I go home to my wifi and good coverage area.

I go to a $150-a-month co-working space I have access to, where the cell coverage is not always robust. I begin to see the problem again. When the phone is on wifi and, in the fragment's onResume, I ask the connectivity manager if the network is connected, it immediately says yes. However, when the question is asked while the phone is on a spotty cell connection, the answer within the first 20 milliseconds is "no". Usually less than 20 milliseconds in, a system broadcast comes in that the network is connected.

So now in onResume, I do a network test, wait 100 milliseconds (I tack on 80 milliseconds), then do a second network test. I only listen to the results of the second test. This seems to solve the problem; I get far fewer false "network disconnected" messages. Another Android programmer told me I must be imagining all of this, but this is what happened for me. Perhaps his phone never has this problem.
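
Stripped of the Android specifics, the pattern is just: probe, wait for the state to settle, probe again, and trust only the second answer. A generic sketch (the `is_connected` callable is a hypothetical stand-in for the ConnectivityManager query):

```python
import time

def network_ok(is_connected, settle_ms=100):
    """Probe connectivity twice, trusting only the second answer.

    The first probe right after resume may return a stale
    "disconnected" on a spotty cell connection; the system usually
    catches up within ~20 ms, so wait settle_ms and ask again.
    """
    is_connected()                    # first answer, deliberately ignored
    time.sleep(settle_ms / 1000.0)    # let the connectivity state settle
    return is_connected()             # only this answer counts

# Simulate a spotty connection: the first query says "no", later ones "yes".
answers = iter([False, True])
print(network_ok(lambda: next(answers, True)))  # → True
```

The cost is a fixed small delay on every resume, which is invisible next to a network round trip, in exchange for far fewer spurious error dialogs.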

July 16, 2017

When I show the network is disconnected dialog, I have been assuming people were always clicking the OK button. I put in code to deal with every manner in which they might dismiss this dialog.

July 18, 2017

I add the timed network connection test to the category grid.

July 19-22, 2017

I write some JUnit and Espresso tests for the app. From Android development on Eclipse to now there have been many changes, but I see that it is very easy to write tests now. It just takes a few minutes to add a JUnit test and an Espresso test and then run both.

July 25, 2017

I QA the app on an ICS (v4.0) tablet. Oops, the permissions for downloading are not correct. Manifest.permission.READ_EXTERNAL_STORAGE was not introduced until API 16. I redo permissions so that the small number of devices that still come in running v4.0 (API 14 and API 15) work.

Also, a few people here and there are having IllegalStateException errors when calling the setDestinationInExternalPublicDir method on a DownloadManager.Request. There are three possible IllegalStateExceptions they may be having, and I don't know which one they are generally hitting. So I set up a method to test for this and upload data to my bug reporting server if the problem is seen. I will be looking into this more as reports come in (although so far, people have been having two of the three possible errors, pointing to different causes).

People are using new licenses on Wikimedia Commons so I add blurbs about those new licenses to the app. There are enough wallpapers in the Sky category (60) to put it into the app, so I do so and put a relevant drawable in for it as well.

Also, in unexpected behavior news, some people click the download button 10 times in a row and download the wallpaper 10 times. So now I have it download on the first click and ignore subsequent clicks.
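
Stripped to its essence, this is a one-shot guard: the first click flips a flag and triggers the download, and later clicks see the flag and do nothing. A generic sketch of the idea (not the actual Android click handler):

```python
class DownloadButton:
    """One-shot guard: only the first click triggers a download."""

    def __init__(self, start_download):
        self._start_download = start_download
        self._started = False

    def on_click(self):
        if self._started:          # ignore repeat clicks
            return False
        self._started = True
        self._start_download()
        return True

    def reset(self):               # e.g. when a new wallpaper is shown
        self._started = False

downloads = []
btn = DownloadButton(lambda: downloads.append("wallpaper.jpg"))
for _ in range(10):                # user mashes the button ten times
    btn.on_click()
print(len(downloads))              # → 1
```

The important design point is where the flag gets reset - per wallpaper shown, not per click - so a user can still download a different wallpaper afterward.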

July 26-30, 2017

More JUnit and Espresso tests.

August 1-12, 2017

Dealing with that problem with people clicking download 10 times in a row on the Detail page, I want to add more state to the Detail Fragment, but I take a look at it and see how much spaghetti code it has. Dealing with loading the existing small thumbnail, and then a larger thumbnail, dealing with grabbing and displaying meta information, dealing with permissions, and downloading and setting - the code has accreted and is now fairly convoluted. I do a JavaDoc generation of the project and look at the detail code in the JavaDoc and it is not pretty. I also manually put together a Graphviz of the DetailFragment method call graph and it is convoluted and confusing.

Instead of accreting even more functionality to an already convoluted class with a lot of spaghetti code, I decide to refactor the class. I start from scratch and cut and paste the old code as needed.

One of the first things - as I mentioned on July 9th, I was keeping the view object around in the Fragment, which was not a good idea. So I dump that and now just call getView() when I need the Fragment's View.

I also have a variety of Strings and such scattered about with information on the wallpaper images and the wallpaper metadata. I consolidate that into two classes - Wallpaper and WallpaperMetadata.

The code had just accreted, had kludges and was calling things unnecessarily. I streamline it into a sensible directed graph. When the fragment resumes, I load the small thumbnail, and have the larger thumbnail load after that. I also have another directed graph where a JSON of metadata is pulled and then displayed on the page. The third directed graph is based on the download and set buttons. If pressed, I check for the proper permissions and, based on that, download and, if requested, set the wallpaper.

This is better than the previous code, which had unneeded dependencies in the image load and the metadata load, and other unneeded dependencies. Everything is now off in its own self-contained silo of functionality.

The network failed dialog is still sometimes popping up when it should not (after onSaveInstanceState has been called, for instance), so we deal with that as well.

August 13, 2017

Someone with a small, low-density phone gives the app a 3 rating. I make an emulator for a phone of this type and test it out. I see the word "categories" on the tab appears in a font which is too large, so I decrease the font size on small, low-density devices.

August 15-17, 2017

More Detail fragment refactoring. Rewrite JUnit tests for the Detail presenter, as I modified the Detail presenter as well.

On the server side - since I'm a full stack programmer [at least according to the definition I read someone give online somewhere of what a full-stack programmer was :) ] - my Python script which determines which wallpapers are popular was running slow because it was taking too long to get rid of duplicates. One reason dumping duplicate IP/wallpaper downloads is important is that, as I am using the Android DownloadManager now, downloads are more broken up and duplicated. I solve this problem by creating a set, and checking whether each IP/wallpaper pair is in that set or not. Any how, now it takes five seconds to process the 145,000 downloads I have, whereas beforehand it took a few minutes. I had identified the uniqueness problem beforehand, but surprisingly it took me less than an hour to solve it.
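
The fix is the classic list-to-set membership swap: `x in some_list` is O(n) per lookup, so deduplicating 145,000 rows against a list is O(n²), while a set makes each lookup O(1) on average. A sketch of the idea (not the actual script):

```python
def unique_downloads(rows):
    """Keep the first occurrence of each (ip, wallpaper) pair.

    Membership tests against a set are O(1) on average, so this is
    O(n) overall - versus the minutes the O(n^2) list-based version
    takes on the same data.
    """
    seen = set()
    result = []
    for ip, wallpaper in rows:
        key = (ip, wallpaper)
        if key not in seen:
            seen.add(key)
            result.append((ip, wallpaper))
    return result

rows = [("1.2.3.4", "cat.jpg"), ("1.2.3.4", "cat.jpg"), ("5.6.7.8", "cat.jpg")]
print(unique_downloads(rows))  # → [('1.2.3.4', 'cat.jpg'), ('5.6.7.8', 'cat.jpg')]
```

Walking the rows and filtering, rather than just calling set() on everything, also preserves the original download order, which matters if later scoring depends on it.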

Back on the Android client side, and looking at the small, low density emulator, I see that some foreign languages use fonts which are too large for the download and set wallpaper buttons. So I shrink the font sizes accordingly.

I sent out for translations of the app, its Google Play blurb, its ads, as well as the descriptions of a few of the more popular wallpapers. The app is already in English, Spanish, German and French; I am now doing Czech, Russian, Polish, Portuguese, Korean, Italian and Dutch. Those seven languages were determined from two factors - one, the number of images in Wikimedia Commons in those languages, and two, the amount of ad revenue I could generate in those countries. If the cost of a translation and a small ad campaign could be recouped within a certain time period, then I opted for that language. There aren't many Korean-language images in Wikimedia Commons, but there is so much ad revenue in Korea that I paid the $30 to translate it any how. As for an ad campaign there, and whether I'll have enough images in that language to fit the bill, I'll deal with that when it comes up. My app's multi-language capabilities are already superior to those of some of the leading wallpaper apps.

There is not a lot to do now. The refactored version has been out for over two months and all the major bugs have been fixed, except for a few infrequent and hard to track down ones. I'll just try to keep adding three or so new wallpapers every day, as I have been doing. This will give more of a selection, and fill out the categories more. Once I reach some threshold with the wallpapers, I will put in search functionality so that people can search for the wallpaper they are looking for. That would be the next big change for the app.

So, now that necessary upkeep on this app has dwindled (hopefully) to a few hours a week for the foreseeable future, I'll start pulling out some of the other irons I have on the fire...

[/android] permanent link

Tue, 13 Jun 2017

Developing an Android app - Wallpapers app - part 4

This is my 4th post in a series about the Wallpapers Android app I developed that is on Google Play. The first blog post describes how I started developing it, this is about the last few versions I released.

So I started working on this app one year and three months ago. I released version 1 one year and one month ago with 335 wallpapers. I am in the middle of a staged rollout of my most recent release, which was a fairly significant one, as I had been working on it for three months without any intermediate releases.

The main thing I did was make the app more explicitly in line with what Google suggested. Google suggested that Android apps be built with certain architecture types. Two of the popular architecture models they suggested were MVP and MVVM. As MVP (Model-View-Presenter) was the simplest architecture they suggested, and fit with what I was doing, I went with that.

Of course, right as I was finishing up with all the work I had done following Google's then-current best practice suggestions, Google I/O happened and Google announced a whole new official architecture framework. So my app's architecture was, in a sense, obsolete before it was released. I considered dropping all my recent work and using the bleeding edge new official architecture suggestions from Google. My thoughts though were that it was yet untried, and other Android programmers felt the same.

In addition to a more explicit Google-blessed architectural model, I decided to make the app more in line with what most Android shops were doing. Although the Android Universal Image Loader library has worked well for me, it has not been updated at all in eighteen months - a long time in an Android environment where new Android versions are coming out regularly. I switched to the Glide library, as it is popular and people like it. I could just as easily have picked another popular Android image loading library such as Fresco or Picasso, but Glide suited my needs better.

I also changed other things. GridView went out, RecyclerView came in. I used Retrofit for JSON loading, and GSON to convert the JSON into objects.

A few things prompted these changes. One was that my method of dealing with my main data structures was not so great. Particularly in giving access to the data model all around the app. I had known that my existing methodology was problematic - but it did work.

However, more of the newest Android devices (Nougat) were coming online. With my number of wallpapers growing, as well as Nougat's new constraints, I began seeing TransactionTooLargeException errors when people scrolled down to the bottom of what were now over 1300 wallpapers.

Another reason for the major refactor is just that I had been working with MVP architecture, Recyclerview etc. in other apps and wanted to bring all of that good stuff into this app.

Any how, here is my timeline of work. As I said in previous blog posts, this is to give people some idea of what goes into programming an Android app.

From December 24, 2016 to March 30, 2017, I am just doing regular updates. From April 23rd, 2017 to now, I am redoing the app in the MVP architecture, as well as making other large changes.

December 24, 2016

These apps are fairly dependent on network connectivity. If the network is not connected, I pop up a dialog fragment. But it pops up while the activity is finishing, which it should not do. So I patch that.

January 13-16, 2017

The images I have on my categories page are larger in file size than they need to be. I shrink them down to 200px each. Also, I have brought in more wallpapers over the past months, and choose better examples than the existing ones to illustrate each category.

March 15, 2017

My big problem on the first release is when I went out to test it and realized it did not work on Marshmallow phones due to Marshmallow's new permissions model. I had done a kludge fix for that ten months before. In February 2017 I bought a Pixel phone running Nougat. While using my app on it and doing some informal QA, I notice there is a race condition in the Marshmallow permissions code, so that it does not always take effect. So I patch that. This is why it's good to have access to a lot of devices for Android. I upload the new version with this fix to Play, which is my last app update on Play for three months (but not my last update to the app, as I am putting about three new wallpapers a day online behind the API accessible to the app).

March 30, 2017

The aforementioned network disconnected dialog fragment is being activated while the onSaveInstanceState method is running, which should not happen. So I disable that as well and patch it. As it happens rarely, I don't update the new code to Play. One reason I don't release the fix is that I didn't anticipate I would still be working on the next release all the way into June - I thought the fix would go out sooner. It is not as major as the Marshmallow fix anyhow.

April 15, 2017

I go down a blind alley. I try to do a kludge to fix the TransactionTooLargeException that Nougat devices are seeing. But it is not possible - some work will be needed. And since some work is needed, I might as well do it right, and do as much work as is needed.

April 23, 2017

This is the start of work that will not be published on Play until June 12th. I decide to model the app on state of the art Android architecture for Model-View-Presenter.

The sample app for it is on Github. The main documentation page has a paragraph which is very confusing, until I realize that it contains a typo, which I send a pull request to fix. This is not an encouraging start.

May 2, 2017

GridView out, RecyclerView in. Wallpapers are now over 1000, so we need to start recycling views better if users want to scroll down into infinity.

Also, Android Universal Image Loader has always served me well, but it has not been updated in eighteen months, and image libraries like Picasso, Fresco and Glide are what the majority of shops are using now. I choose Glide, which has been a suitable choice so far.

May 3, 2017

I start working on the Presenter part of the Model-View-Presenter. I get how this works - the View Fragment and the Presenter each implement their half of a contract interface. This way, transactions between the View and the Presenter are made very clear (and testable).
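To give an idea of the shape, here is a minimal plain-Java sketch of such a contract. All the names (WallpaperContract, WallpaperPresenter, the example URL) are hypothetical illustrations, not the app's actual classes, and there are no Android dependencies:

```java
import java.util.ArrayList;
import java.util.List;

interface WallpaperContract {
    interface View {
        void showWallpapers(List<String> urls);   // render a page of thumbnails
        void showError(String message);
    }
    interface Presenter {
        void loadWallpapers();                    // triggered by the View
    }
}

// The Presenter implements its half of the contract and talks to the
// View only through the contract interface, which makes it testable
// with a fake View.
class WallpaperPresenter implements WallpaperContract.Presenter {
    private final WallpaperContract.View view;

    WallpaperPresenter(WallpaperContract.View view) {
        this.view = view;
    }

    @Override
    public void loadWallpapers() {
        // A real presenter would ask the Model repository; here we fake it.
        List<String> urls = new ArrayList<>();
        urls.add("https://example.com/wallpaper1.jpg");
        view.showWallpapers(urls);
    }
}
```

In a unit test, a fake View just records what the Presenter sent it, so the View-Presenter transaction can be verified without any Android framework classes at all.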

May 6, 2017

I use Retrofit to grab the JSON, and GSON to turn the JSON into POJOs. Retrofit has a ready-made GSON converter. This all makes the code cleaner.

May 13, 2017

I fiddle with Glide's disk caching strategy, so that thumbnail images will tend to only have to be downloaded from the server once.

May 18, 2017

Glide has lots of animations, plus the RecyclerView blinks when the data set is changed. I work to minimize this. This is still not totally done, as I have not taken advantage of ranged data notifications to the adapter yet.

June 3, 2017

Trouble with FragmentPagerAdapter. Sometimes a new Fragment is created for an existing tab, while the old one comes back to life as well. I start dealing with this. I still don't feel it is totally dealt with, although I cannot see any problems it is causing now. I try lots of things with retained fragments, FragmentStatePagerAdapter etc.

June 7, 2017

The code from April 23rd to June 3rd was getting a little convoluted, so this refactoring gets a refactoring. I try to take out all the little kludges to get things working and streamline things sensibly.

June 9, 2017

A real breakthrough. The main data structures are resident in the Model repository. When the app starts up, I pull down to the View's adapters a reference to the relevant data structures that reside in the Model repository. That is the first and last time the data structures are referenced - on subsequent updates, all the actual work is done in the Model repository, and the current View adapter is just sent calls that do a notifyDataSetChanged. Very clean (cleaner yet would be ranged notifies to the adapter).
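A rough plain-Java sketch of that flow (hypothetical names; a stand-in interface plays the part of the RecyclerView adapter's notification surface):

```java
import java.util.ArrayList;
import java.util.List;

// Stand-in for the adapter's notification surface.
interface DataSetObserver {
    void notifyDataSetChanged();
}

// The Model repository owns the one and only list. The adapter keeps a
// reference to this same list; it never receives a copy.
class WallpaperRepository {
    private final List<String> wallpapers = new ArrayList<>();
    private DataSetObserver observer;

    // Handed to the adapter once, at startup - first and last reference.
    List<String> wallpapers() { return wallpapers; }

    void attach(DataSetObserver o) { observer = o; }

    // All mutation happens here in the repository; the adapter is only
    // told that something changed.
    void addAll(List<String> fresh) {
        wallpapers.addAll(fresh);
        if (observer != null) observer.notifyDataSetChanged();
    }
}
```

Swapping that single notifyDataSetChanged for ranged notifyItemRangeInserted-style calls is exactly the "cleaner yet" step, and would also remove the grid blink mentioned above.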

June 12, 2017

The todo list is getting shorter and shorter. I decide to punt on ranged notifies, even though it causes the grid to blink on data notifies, particularly on my oldest Android device.

I want the app to open, do a small JSON grab of the most recent wallpapers, and have those images put in Glide and loaded. I want the user to quickly see something. That is the priority; everything else follows. So the first JSON load does not send an app InstanceID to the server. Previously the order had been first JSON pull -> load images -> load InstanceID -> do second JSON pull. Now I do the first JSON pull, and kick off a Runnable to send the InstanceID to the Model repository. Retrofit grabs JSON without InstanceIDs until the ID is ready. So the user is not inconvenienced. It works out well. The app is architected well enough, and with clean enough code, that these little extras don't really affect things. Timely InstanceIDs are nice to have, but not critical.
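A sketch of that ordering, using a plain executor (hypothetical names; the real code uses a Runnable plus the Retrofit and Model repository plumbing, and the real ID comes from the InstanceID API rather than a local method):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicReference;

class ApiClient {
    // Empty until the (slow) InstanceID generation finishes; JSON pulls
    // made before then simply go out without an ID.
    private final AtomicReference<String> instanceId = new AtomicReference<>("");
    private final ExecutorService background = Executors.newSingleThreadExecutor();

    // The first JSON pull happens immediately, ID or no ID.
    String buildJsonUrl(String endpoint) {
        String id = instanceId.get();
        return id.isEmpty() ? endpoint : endpoint + "?iid=" + id;
    }

    // Kicked off in the background; the user never waits on this.
    void fetchInstanceIdAsync() {
        background.submit(() -> instanceId.set(generateInstanceId()));
    }

    // Stands in for the slow, out-of-process ID creation.
    String generateInstanceId() { return "abc123"; }

    // Shutdown helper: wait for pending background work to finish.
    void awaitIdle() throws InterruptedException {
        background.shutdown();
        background.awaitTermination(5, TimeUnit.SECONDS);
    }
}
```

The point of the design: the URL builder never blocks on the ID, so the first pull is as fast as the network allows, and later pulls pick up the ID for free once it exists.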

Why do I send InstanceIDs to the server? Because it helps with bug tracking. Some users are having a problem, but tracking by IP does not cut it, as their IPs change a lot. Even tracking by device does not cut it, as some devices are fairly common. If I get a low rating on Play on a certain day, I look through the server logs for that device type, country etc. Two people who gave me a one rating loaded the JSONs, but no wallpapers, which helped me track down a bug.

So we're coming into the home stretch. I do a production build. Oops. Guava and Firebase libraries conflict. That's simple enough, I don't have much Guava code in the app. I rip the small amount of Guava code I have out.

I QA on my devices. I should probably do more QA, but it's been three months and I am antsy, and this can go on forever. I thought I was releasing a few weeks ago, but QAing kept catching problems. So I release to alpha, and then beta. One of my beta testers says it is all good. The Google Play developer console automatic tests go through fine. So I release to 25% of my users. Later on, I go out. I check the app on a public wifi network which is flaky. Oops, my detail page is messed up. The detail image appears, then disappears. Sometimes it is replaced by a better image a few seconds later, sometimes it is not replaced at all.

June 13, 2017

I fix the error. The higher resolution thumbnail is loaded to a file by Glide, and when that is all copacetic, it refreshes the existing lower resolution (RecyclerView grid) thumbnail, with the existing ImageView's Drawable serving as Glide's placeholder. This seems to work. I QA it for a bit and then release - to 50% of the app users.

There is still more QA I want to do. My main thing right now is that the app is functioning properly. I am most concerned with how the logic is dealing with network latency and flaky networks. As well as other problems that might crop up. Once I feel the app is mostly stable, I can concentrate on other enhancements. Of course, a regular addition of new wallpapers will go alongside this.

It's been a year and three months, and I haven't really promoted the app heavily. It has over 3500 active users, and a 4.3 rating, but I am concerned with the users who give it ratings from 1 to 3. I am concerned with bugs like TransactionTooLargeException. I am concerned with users who have more latency than I deal with (I do some things in QA to test this, but can do more). On top of these stability questions, the app only has 1371 wallpapers, whereas the main competitors have many more. If the app is stable, I can concentrate primarily on adding more wallpapers. After enough wallpapers are added, it would make sense to put in search functionality, which is the main emergent feature the app could use.

So I will see how this update fares, perhaps do some more small fixes, and if things are going well, may start ramping up the marketing budget some. I previously was targeting English speaking countries and Spanish speaking countries. Currently I am targeting French speaking areas, and will soon be switching to primarily targeting German speaking areas. If everything is stable, and a few new fixes go in, I may ramp up marketing efforts, even if it is just to see how users respond to the app.

I should mention in closing that in April, another effort went into the app framework. I paid someone else to pick three wallpapers a day throughout April. I also paid a Python programmer to speed up the process I had to thumbnail images I had selected. Both people did a good job, and I may work with both again.

[/android] permanent link

Wed, 28 Dec 2016

Oculus Rift

So having tried out the HTC Vive two weeks ago, I decided to go to Best Buy and give the Oculus Rift a try.

The Microsoft Store had more of an area set out for the Gear demo; the Rift area was smaller and not partitioned off. The demo guy worked for neither Best Buy nor Facebook/Oculus but for a third party, though one more connected to Facebook/Oculus than Best Buy.

The setup had the Rift, a sensor, Rift headphones, and the new Oculus Touch controls. I put on the headset and then the touch controls. Like the Vive, you can see your virtual hands in front of you. You begin in something like a hotel lobby which is a waiting area of sorts. Then you're put on the edge of a skyscraper, and can look over the edge. I definitely had some visceral feeling of vertigo doing that. You're also put in a museum with a rampaging dinosaur, which looked real enough. You also get to watch two little towns operate. You also meet an alien. I believe this is the "dreamdeck" demo, although I don't recall seeing robots.

Then you're in an empty VR room and you learn how to use the touch controllers. Then you can choose what app to use - I chose Medium, a sculpting app. It's cool - you choose if you're left or right handed, and then the left hand does things like undo the last thing you did with your right hand. You get to sculpt a 3D tree - first you place your tree upright, then add branches, then add more bulk to the trunk, then sculpt down the branches, then add leaves etc. Your right hand keeps transforming from one tool to another depending on the job.

In my experience, the Vive felt more like 3d, the Rift felt a little more like I was looking at two screens. Although as I adjusted the headset it felt less like that - I'm not sure if that was the Rift or me just not tightening it properly.

One nice thing about the Vive demo was some of it was a little less guided - I could move around and manipulate what I wanted. The Oculus demo was guided each step.

Still it was pretty awesome. This is just the first generation, they'll get better and cheaper as time goes on. VR is obviously already here for early adopters, with killer apps and cheaper and better hardware it will take over the video game market.

[/vr] permanent link

Thu, 15 Dec 2016

VR

So today I went down to the Roosevelt Field shopping mall. I saw Microsoft had a store there; often I would just walk by, but this time I decided to see what they had.

Among the various devices were boxes with Oculus Rifts and HTC Vives in them. They even had the HTC Vive set up for a demo. I have of course been hearing a lot about VR since the Oculus Rift Kickstarter kicked off in 2012 (actually I've been hearing about it before that even). I've never tried the Vive or Rift though. Actually the Rift's hand controllers, Oculus Touch, just came out last week, so I'm not all that late in this.

It was quite amazing. There's been a number of times in my life that I have seen a new piece of technology - a PC, a modem, a Unix box on the Internet, a web browser - and it was immediately obvious how impactful this technology would be on the world. VR in the Vive was one of those experiences. Seeing it you can foresee the massive changes this new piece of technology will engender.

One thing a lot of people who have seen this have said is you have to see it to understand. You can explain it to people - but people won't really have an understanding of it until they use it. Because it is so visceral. It definitely has the "presence" within the immersion that people talk about.

I didn't realize how interactive it was. I walked around, I was under the ocean looking at fish, whales and a sunken ship, I picked up objects, I picked up mallets and played Mary Had A Little Lamb on a xylophone in a wizard's workshop, which then played over the store's speakers.

When I took the headset off after a few minutes I experienced what some have discussed. It was slightly disorienting. My central nervous system said - how did you get from the bottom of the ocean to wandering around this mall so quickly? It's a signal of just how this relatively inexpensive and relatively portable system really has finally got immersion presence right.

[/vr] permanent link

Sat, 03 Dec 2016

My Wallpapers Android app, and the last 10%

So "version 2" of my Wallpapers app for Android went out on June 30th. I then looked to make improvements. I put the category name on top of the category page. I adjusted text size based on the screen's dpi. I removed image margins on the image detail page. I also did some UTF-8 fixes, as I wanted to start moving into the international, non-English-exclusive market. These changes done, I released "version 3" (technically 7) on July 23rd.

Continuing on non-English languages, I then did many more Android-side and server-side changes to handle other languages. I paid to have the app translated into Spanish, French and German. Many of the pictures already had multi-language blurbs and/or tags. I upgraded the Gradle version. I fixed the button and button text sizes. I added more licenses for the pictures being used. I added functionality so categories could survive screen rotation like non-category grids. Google seemed to be sidelining Google Analytics and pushing Firebase, so I tore Analytics out and put some Firebase in. And then - release of "version 4" (8, technically) on September 16th.

I had done the foreign languages because it seemed as if it was time to expand beyond English-speaking countries. People used the app, rated it well, came back to it to check for new wallpapers. I thought I had done enough QA on it. So three language translations were paid for, and on push-out of "version 4", I began doing heavier paid promotion of the app.

Oops. More people meant more usage, which meant more bugs exposed. I got a one rating on September 21st and another one rating on September 24th. I tried to figure out what was going wrong. It was hard to pin down specifically which users in my web logs had given the one ratings, but I had what looked like good candidates. Both had loaded the JSON pages but never displayed the images. I looked through the code and it looked like I could have made an error somewhere. I also saw from the logs that others were loading two JSONs and that was it, but they didn't give any app ratings (meaning they were probably unhappy but didn't say so). So I wound down the paid promotion somewhat while I looked for what was wrong.

While this was not the only thing I did for the past two months, it is what I was working on in the past two months with regards to the Android (client) side of this app.

One blind alley I went into was putting debugging code in to send a debugging message to the server. First I tried Firebase, which didn't work well. Then I rolled my own.

What I should have done to speed things up is go over the code line by line and see if it made sense. I should have modified my dev server scripts to slow down responses, to a slowness rate I don't have during regular QA, but which people who have a slower connection in some countries have.

I had desired to architect the app nicely, but in just getting it working bit by bit, it had become messy. What I have to understand are the Android classes, namely Activity and Fragment, and their modern usage - for example AppCompatActivity over Activity. I do things the modern way. I need to understand Activities, Fragments, BroadcastReceivers, AsyncTasks and so on: their lifecycles, how they respond to screen rotations, how they respond to people hitting the home button, back button, or UI buttons, and how they communicate from element to element, say, Fragment to Activity, or vice versa. I also have to anticipate that some people will load JSON more slowly than I do, which means that if I do not design responsiveness and concurrency correctly, and do not QA with this in mind, I can miss bugs.

Anyhow, the app was rearchitected. Everything mostly hangs off an Activity. Not exactly an object-oriented ideal, but even the best Android programmers seem to complain about context god objects, fragments and so forth. I have a fairly light fragment hanging off the Activity. I also have a class which is the main data structure, which hangs off the Activity. There are two instances of it, associated with the two fragments, but due to the quirks of Android it is easier to have them associate with the Activity than with the fragments they belong to. Off this data structure hangs an AsyncTask that hits the web API and expands the size of the data in the data structure. I have other things going on, but that's the main architecture - an Activity with three Fragment instances hanging off it, plus two data structure instances, each with an AsyncTask subclass that grabs more data.

In the investigation of who was having problems, I could not always tell who was who by device type or IP. So I started having devices send an instance ID to the web API. But it can take a few seconds for an instance ID to generate. So I do one JSON grab, have the image library (Android UIL) grab the images it shows, generate an ID and then grab the locations of more images. I want people to get a quick response before the delay of an instance ID creation, even if I have to match people up later on the server-side. The main thing is seeing who is having problems so I can fix it.

For screen rotation, I had a Parcelable data structure object in another Parcelable data structure at some point. This was too complex for the system, or me, so I just made the top instance Parcelable and pushed the information up to it.

Things have been moved around enough times that it all seems to work but looks a little sloppy. So I will clean it up and make it look nicer. It took two months to get to this point though, so if it's working I will be taking a little breather - I spent more time rewriting this than I had wanted to.

I started writing this app in early March. I wanted to write an app that was a pretty vanilla Android app with no fancy NDK/JNI stuff. I also wanted one that could be popular and possibly slightly lucrative, and one which was not something people could knock out in a few weeks, but which would not take forever either. After all, I have to build apps of this type before I can handle the schedule estimating and scope of yet more complex apps. Doing this app has given me a better idea of how to schedule Android projects, or programming projects in general; of the care I need to put into each logical piece of a complex program; and of making accommodations for Android oddities.

I'm still looking over the code and looking out for user problems. Hopefully the main problems have gone away. The app has 21 five-star ratings, 9 four-star ratings, and 2 one-star ratings. The one stars both seemed to have the problem where the app wasn't working. 34% of downloaders are active devices. This is higher for some of my other apps, but this one is more of an entertainment app. It also has 13% user retention into month 2 in terms of active use. If I update wallpapers more frequently, that will go up. So hopefully I fixed the problem that had come up.

[/android] permanent link

Sun, 03 Jul 2016

Popular

In my Wallpapers app for Android, I have a popular tab, to show the most popular wallpapers people are using. I generate it by going through my Apache web server log files, and pulling all downloaded wallpapers. I see which downloads transferred at least as many bytes as the file size (i.e. were not aborted before completion). I also count a download of a wallpaper from a particular IP only once. I also note the date of the download.
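That counting rule can be sketched like so (hypothetical names and file sizes; the regex is a simplified Apache log matcher, not my actual script, and requiring a 200 status is an added assumption - the rule in the text keys only on bytes sent):

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

class PopularityCounter {
    // Pulls the fields we need from an Apache combined-log line:
    // IP, request path, status, and bytes sent. Simplified for the sketch.
    private static final Pattern LINE = Pattern.compile(
        "^(\\S+) \\S+ \\S+ \\[[^\\]]+\\] \"GET (\\S+) [^\"]+\" (\\d{3}) (\\d+|-)");

    // Known full sizes of each wallpaper file (hypothetical values).
    private final Map<String, Long> fileSizes;
    // (path, ip) pairs already counted, so an IP counts once per wallpaper.
    private final Set<String> seen = new HashSet<>();
    final Map<String, Integer> downloads = new HashMap<>();

    PopularityCounter(Map<String, Long> fileSizes) { this.fileSizes = fileSizes; }

    void feed(String logLine) {
        Matcher m = LINE.matcher(logLine);
        if (!m.find()) return;
        String ip = m.group(1), path = m.group(2);
        if (!m.group(3).equals("200")) return;  // assumption: successful responses only
        long bytes = m.group(4).equals("-") ? 0 : Long.parseLong(m.group(4));
        Long full = fileSizes.get(path);
        // Count only completed downloads (bytes >= file size), once per IP.
        if (full != null && bytes >= full && seen.add(path + "|" + ip)) {
            downloads.merge(path, 1, Integer::sum);
        }
    }
}
```

Feeding the same log line twice leaves the count unchanged, since that (path, IP) pair has already been seen; a second IP completing the download bumps the count to two.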

The date the wallpapers became available on the app varies. Some were available on the day the app launched. On the other hand, I just added several a few hours ago, so those have only been available for a few hours.

This creates a problem when calculating for popularity. If I count popularity by all downloads, the older wallpapers will keep showing up on top, since they have the history there. Getting on top of the popular page increases the downloads of a wallpaper, so it becomes self-perpetuating too. On the other hand, if I count only downloads from the past day or two, there could be a statistical fluke where a normally popular wallpaper disappears down the list, while some usually unpopular one hits the top. It's best to count that older information in some way. But how?

I came up with a scoring scheme which I am happy with. It's loosely based on the idea (though not the exact rate) of weak force beta decay in physics. I count how many days back a download was - if it was 10 days ago, then variable d is 10. Then I put it into the formula 1/log10(d+1). This is the score for that download, which in this case is ≈ 0.96. Then I add all those scores up for that wallpaper and that is its popularity score.

Do you see anything wrong with this? What about wallpapers downloaded on the day this is calculated? Then d would equal 0, log10(d+1) would equal 0, and we would be trying to calculate 1/0, which is impossible. What I do where there are same-day downloads is use 1.5 instead of 1: we compute 1/log10(d+1.5), not 1/log10(d+1), in that case.
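In code, the scoring rule looks like this (a sketch; the class name is just for illustration):

```java
class PopularityScore {
    // Score for one download d days ago: 1 / log10(d + 1), except d == 0
    // uses an offset of 1.5 instead of 1 to dodge the division by zero.
    static double score(int daysAgo) {
        double offset = (daysAgo == 0) ? 1.5 : 1.0;
        return 1.0 / Math.log10(daysAgo + offset);
    }

    // A wallpaper's popularity is the sum of its download scores.
    static double popularity(int[] daysAgoPerDownload) {
        double total = 0;
        for (int d : daysAgoPerDownload) total += score(d);
        return total;
    }
}
```

So score(10) = 1/log10(11) ≈ 0.96, matching the worked example above, while a same-day download scores 1/log10(1.5) ≈ 5.68 - fresh downloads weigh heavily, and the weight decays slowly as the download ages.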

That same-day exception is arbitrary, but all of this is arbitrary. I might futz with the algorithm more, but it has been working well so far. Old reliable popular wallpapers which stay popular show up on the top of the popular page, but so do newer wallpapers which were uploaded a few days back and have proved to be popular since then. It seems to have worked out well so far.

[/android] permanent link

Version 2

So I spent 2 1/2 months writing version 1 of an Android app. I wanted it to be a minimally viable product, as is the common parlance.

It was a minimally viable product. Actually, I worried it was perhaps too minimal and not viable enough. Its whole point was offering many wallpapers for Android devices, and I was worried that having only 335 wallpapers at launch was too minimal.

But that was a server-side content problem, and in the hours and days after the client launch, I added more and more wallpapers - a month later it now is 515 wallpapers, and I'm adding more every week.

So aside from the continual need to add new wallpapers, I wanted to work on features and such which I had pushed off to version 2.

At the top of the list was tabs. I was using LocalActivityManager to do tabs, which was deprecated in July 2011. I was reusing my old code which I probably wrote back in June or July 2011. It still worked though. I attempted to do something more modern in the 2 1/2 month version 1 MVP development cycle, but it was such a hassle I punted it to version 2. As it had been deprecated a long time, and I wanted something that looked good and modern and worked well, doing tabs in a modern, correct manner was top of the list for version 2 features.

I didn't expect it to take long - but it did. It took almost a month. Now, to be fair, I wasn't spending every day working exclusively on Android tabs. I was getting those 180 new wallpapers during this period. I was also doing things other than this project. Then the past few days I've been doing the other things on that list for version 2 - improving design, fixing bugs, adding features. But the bulk of the time was spent on tabs.

So if you go to the Android development training page for tabs, it suggests putting a tab listener on the Action Bar, and then setting up ActionBar tabs. Unfortunately, these methods were deprecated in API 21. Worse, the Android development tutorial was not updated, and is telling you to do things which are deprecated.

Also - that tutorial suggests FragmentPagerAdapter or FragmentStatePagerAdapter. FragmentPagerAdapter and FragmentStatePagerAdapter generate arcane tags for Fragments. If I want to get the tag for a fragment (after the device rotates, say) because I want to execute a method in it - I need something better. So I am using Mark Murphy's CWAC-Pager.

I also skipped the tabs on the ActionBar. I did use a TabLayout with a ViewPager though, which needs fragments. So I rewrote the LocalActivityManager tab Activities as Fragments.

Then I wanted things to remain the same after rotation and - that was a pain. I save instance state in the Activity and Fragments, and check for it when rotation is done. So now it works. If the tutorial said what to do that was not deprecated, or at least said "don't try this, it's deprecated", it would have saved me time. It's done now anyhow.

So then I went through the list of things to do after that. With tabs done, I changed tab design and colors. I changed screen background colors, button colors, text sizes and colors etc. I put pictures of wallpapers on the categories list.

Android automated testing and user-reported ANRs had shown some problems in the detail page, so I fixed those. Both were due to behavior I had not anticipated on the UI - the ImageLoader instance being killed off after long disuse was one. People clicking a detail screen, backing out and clicking another one quickly was another. My BroadcastReceiver would see the old message, poll the new JSON queue and empty it. So I changed it to peek for a good JSON for the current detail wallpaper, and if it didn't see one, to ignore the old broadcast and wait for a new one.

So version 2 (technically version 5 - I did three minor updates after the version 1 release) works well. It looks good - like one of the real wallpaper apps. Still some more way to go, but it's better. I did an alpha, had the automated testing check it, and someone else checked it and gave me advice (thanks!). Then I did a staged rollout to 20% of users. I'll roll out to 100% soon. Then I'll continue adding new wallpapers and start working on version 3.

Although version 3 needs things done, they're lower priority than the stuff that version 1 and 2 needed. I also will be downloading some wallpapers every day. Some future features need these new wallpapers, as a wallpaper search makes more sense the more wallpapers you have.

Anyhow, with version 2 out, which is the stuff I wanted in version 1, but figured the app could launch without - now I have more time. Time to do another app. It took 3 1/2 months to get to this point, so I'm not in a rush to dive deep into something new this week. I'll take a few days (or weeks) to mess around with a few things. Measure twice, cut once - why jump into a 3.5+ month project without much thought? The stopwatch app was one I knew I could do real quick, and the wallpaper app seemed something I could do in 2-3 months, would enjoy more than alternatives, had a shot at being lucrative enough to compensate me for time spent etc. So now I'm on to the next thing, whatever that may be...

[/android] permanent link

Sun, 12 Jun 2016

Modern Android programming

What is modern Android programming exactly? It's hard to tell what to do. You go to developer.android.com, click Develop, and click Training right under that - but the training is out of date. It's been that way for the past five years, from the tutorial of five years ago up to the tutorial there today.

So you go to developer.android.com, click Develop, then Training. There is a Getting Started section, a Building Your First App section, a Supporting Different Devices section, and a Managing the Activity Lifecycle section. Then there is a Building a Dynamic UI with Fragments section. That has a subsection called Communicating with Other Fragments, with a sub-section called Define an Interface. It has example code suggesting:

public class HeadlinesFragment extends ListFragment {
    /* ... removed code for brevity ... */

    @Override
    public void onAttach(Activity activity) {
        super.onAttach(activity);
        /* ... removed code for brevity ... */
    }
}

So we put that in Android Studio and...a flag comes up. It is deprecated! We look at the API reference and see it was deprecated in API level 23, in favor of onAttach(Context).

So now what the hell do we do? Five sections into the training for beginners, and it's telling us not to do what the training tutorial said to do. Where do we go from here?


I stumbled on this because I am trying to write a simple app with three tabs on top. However, how to go about getting this working is a completely convoluted mess. The training manual is deprecated (like this section), the sample code is deprecated, and, like many things in Android, it is near impossible to figure out how the modern Android programmer - one who doesn't want to use deprecated methods - should proceed. It is doable, but the Android core team doesn't seem very shy about breaking the API, or at least deprecating it. Often we're told to do something, such as setting up tabs, via a certain method. So we go to do it the way we were told, but learn that method was deprecated and another method is suggested. So we go to use that new method, but find out that now THAT method is deprecated as well - and if we're told where to proceed from that point, we're lucky.

[/android] permanent link

Sat, 21 May 2016

Releasing another Android app on Google Play

Summary: I started releasing Android apps in 2011, this is about my recent release of an app I wrote over the past 2 1/2 months. I walk through my whole process.


From mid to late February, I spent two weeks working on an Android app, a process which I write about here. With that done, I began casting around for my next project.

I wanted to write an Android app for which the time I would spend writing it would hopefully be financially remunerated. From previous experience, I knew writing an app for the mass market would be the most likely to yield this, as opposed to an app for some niche market. As I have not had much success with games on Android, but have had success with non-game applications, I decided not to do a game. I've also had success with ad-based apps, so I decided to get revenues from ads.

With these constraints satisfied, I now have a more limited number of possibilities to choose from. I could do a photo editor, a battery saver, a wallpaper app, a file manager and this sort of thing. So at this point I roughly estimated how long it would take me to write a minimally viable version 1 of each of these apps. Then I sorted this list by time. Then I ticked off some other factors, like the probability of success in the timeline, and how much competition would have to be dealt with (as with, say, yet another flashlight app). Another factor is what appealed to me - which options near the top of the list would be more fun to do, which I would learn from, etc.

Going over the list, a wallpapers app started to seem like the best choice. I could write version 1 in less than three months, it was for the mass market, I could probably compete enough with the existing players to get some of my time remunerated, it would be more enjoyable than the alternatives etc. It did have some drawbacks - if it were to be a success, I would need not just hundreds of wallpapers, or thousands, but tens of thousands, or hundreds of thousands or millions. But I could probably get away with only having a few hundred wallpapers for version 1. So I chose to do a wallpapers app.

March 3

I look at the competition. There are two apps with 10-50 million downloads, one with 50-100 million downloads, and one with 100-500 million downloads. So that looks good - I'll start doing OK if I get into the hundreds of thousands of downloads, and if it goes into the millions I will start getting remunerated for real. The apps all differ from one another slightly, but most are similar in many ways.

I decide that if one of these top apps lacked a feature and still succeeded, my version 1 can lack that feature as well. So I don't need a search feature (with only a few hundred wallpapers, it seems a pointless feature anyhow). I don't need tags. The only tabs I need are recent, popular and categories. I can start with just 11 categories. I can skip a favorites feature. I don't need a share feature (although that would be desirable for increasing the app's popularity from both ends - the sharer and the person who receives the share and may learn of the app). I can skip some of the design and design animations. I can skip a related-wallpapers feature. For picture information I will include license, attribution and perhaps some information on the picture, but can skip some of the other information for now. So a lot of this is just limiting scope so version 1 can be published in a reasonable amount of time. Some of these things can come later. I would prefer that a high percentage of people who download the app keep it and use it, and that its ratings be high, so that reflects on the schedule as well.

Since I am doing the full stack and sourcing the images as well, I have to decide what order to do things in. As sourcing the images is the most outside my control (unless I make all my own wallpapers), I decide to start with that. The limitations I encounter there will guide the rest of the project. Also, if there are any unpleasant surprises I prefer to learn them early, and perhaps even drop the project quickly if they're too much, not having wasted much time on it. So I'll start with picture and picture information sourcing, then do the database, then make a REST API interface between the database and client, and then do the Android client.

March 6

So I start casting around for image sources. Wikimedia Commons seems a good first source. They have a lot of good images, and the licenses are usually Creative Commons or public domain. Wikimedia Commons has featured pictures, which help me pick pictures more quickly; featured pictures are also usually translated into many languages already. They also have a decent API. I start formulating an idea of what the MySQL database schema will be (as this project's scope is limited, it will not entail a MySQL to MariaDB migration). I download a few pictures and note their picture information. I start writing a Python 3 script to parse the XML from the Wikimedia Commons API. Instead of hammering the API for the same XML over and over, I download it locally and work off the file.

March 7-10

Work on Python script.

March 11

So now I feed my Python script an image URL, or a source URL on Wikimedia Commons, and it downloads the related image, as well as queries the Wikimedia Commons API to get the file name, title, size, uploader, license, description, and other information. It's pretty much pulling all I need to start with now.
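A minimal sketch of that kind of script: the endpoint and the `action=query`/`prop=imageinfo` parameters are the real MediaWiki API, but the helper names and the exact fields pulled are my illustration, not the actual script.

```python
import urllib.parse
import xml.etree.ElementTree as ET

API = "https://commons.wikimedia.org/w/api.php"

def build_query_url(file_title):
    """Build a Commons API URL asking for image info on one file.
    The iiprop fields are ones the MediaWiki API documents."""
    params = {
        "action": "query",
        "titles": file_title,
        "prop": "imageinfo",
        "iiprop": "url|size|user|extmetadata",
        "format": "xml",
    }
    return API + "?" + urllib.parse.urlencode(params)

def parse_imageinfo(xml_text):
    """Pull the fields a wallpaper database might need out of the API's XML.
    The <ii> element carries url/size/user as attributes."""
    root = ET.fromstring(xml_text)
    ii = root.find(".//ii")
    return {
        "url": ii.get("url"),
        "width": int(ii.get("width")),
        "height": int(ii.get("height")),
        "uploader": ii.get("user"),
    }
```

Working off a locally saved copy of the XML, as the post describes, just means feeding the saved file's text to `parse_imageinfo` instead of fetching the URL each time.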

March 12

So now I really start my database schema. I use MySQL Workbench to help. I try to remember all that first normal form, second normal form etc. stuff. One thing I consider is things which are singular now, but might be plural in the future. What if a wallpaper can come from multiple sources? What if a wallpaper has multiple licenses? I design with this in mind.
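A sketch of that "singular now, plural later" idea using junction tables - in-memory SQLite standing in for MySQL here, and the table and column names are illustrative, not the real schema:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE wallpaper (id INTEGER PRIMARY KEY, title TEXT NOT NULL);
CREATE TABLE source    (id INTEGER PRIMARY KEY, name  TEXT NOT NULL);
CREATE TABLE license   (id INTEGER PRIMARY KEY, name  TEXT NOT NULL);
-- Junction tables: a wallpaper can gain extra sources or licenses
-- later without any schema change.
CREATE TABLE wallpaper_source  (wallpaper_id INTEGER, source_id INTEGER,
                                PRIMARY KEY (wallpaper_id, source_id));
CREATE TABLE wallpaper_license (wallpaper_id INTEGER, license_id INTEGER,
                                PRIMARY KEY (wallpaper_id, license_id));
""")
db.execute("INSERT INTO wallpaper VALUES (1, 'Sunset')")
db.execute("INSERT INTO source VALUES (1, 'Wikimedia Commons')")
db.execute("INSERT INTO source VALUES (2, 'Some Other Source')")
db.execute("INSERT INTO wallpaper_source VALUES (1, 1)")
db.execute("INSERT INTO wallpaper_source VALUES (1, 2)")  # second source: no ALTER needed

def sources_for(wallpaper_id):
    rows = db.execute(
        "SELECT s.name FROM source s JOIN wallpaper_source ws "
        "ON ws.source_id = s.id WHERE ws.wallpaper_id = ? ORDER BY s.name",
        (wallpaper_id,))
    return [r[0] for r in rows]
```

Had sources been a column on the wallpaper table instead, adding a second source later would mean a schema migration; with the junction table it's just another row.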

March 13-14

Still designing the schema.

March 15

Start writing functionality in Python to insert the information pulled from Wikimedia Commons API into the database created by the new database schema.

March 16-19

Keep adding database insertion functionality into Python script (take St. Patrick's Day off).

March 20

Finish putting in functionality. Start populating server database with image information and web proto-API with images.

I'm happy this is done, but 17 days in it seems we have not come that far.

March 22

I put a JSON file on the web server pointing to the image files. I start working on the Android app. I had already decided to use the Android Universal Image Loader (UIL) library as I am familiar with it. I load the JSON, pull out the image URLs, then load them into a GridView with UIL.
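The post doesn't show the index JSON's real format; a minimal sketch of generating that kind of index server-side might look like this (the domain, field names and layout are all hypothetical):

```python
import json

BASE = "https://example.com/wallpapers"  # placeholder domain

def build_index(image_names):
    """Emit the kind of index JSON a client grid might load: one entry
    per wallpaper, pointing at a grid thumbnail and the full image."""
    entries = [{"id": i,
                "thumb": f"{BASE}/thumbs/{name}",
                "image": f"{BASE}/full/{name}"}
               for i, name in enumerate(image_names)]
    return json.dumps({"wallpapers": entries})

index = json.loads(build_index(["cat.jpg", "lake.jpg"]))
```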

March 23

Start selecting images. Get pictures of food, animals, flowers etc. Now loading on the Android device. I notice Wikimedia Commons is good for many things, but is lacking in some areas. It is good for real photographs, but not so much for images of inspirational quotes, artwork, photographs with heavy filters overlaid on them for artistic purposes, and that sort of thing.

March 24

I start looking for another image source to fill in for what Wikimedia Commons lacks. DeviantArt seems a good choice. They have a good API, and many of their pictures have amenable licenses which I can use. They also fill many of Wikimedia Commons's gaps - images with inspirational sayings, artwork, filtered photographs, themed photographs (flowers in a heart shape and that sort of thing). So I start working on a DeviantArt script. This also reflects on the database schema - ultimately the database will have various image sources, so adding a second source hardens up the database. For example, Wikimedia Commons gives a SHA-1 hash for its images, but DeviantArt does not, so I will either have to compute a SHA-1 for each new image, or drop that column from the database schema.

This is getting long, so I'll be more brief for the middle section of the project.

March 26

Add a details JSON for each wallpaper

March 30

Add functionality so that people can download and set wallpapers on Android

April 3

Download wallpapers from DeviantArt

April 4

Make Android icon for app

April 6

Work on JSON for details page

April 7

Work on Android details activity. Work on picture grid design details.

April 8

Wikimedia Commons uses a lot of HTML for details - so I put clickable links in Android for them

April 9

Get more wallpapers. Increase database size for various columns.

April 10

Add tabs

April 11

Select which categories to do. Start downloading sports pictures (one of the categories for v. 1)

April 14

The wallpaper image on the details page can be smaller than the final downloaded wallpaper, and the images in the image grid can be smaller still. So I write Python scripts (using the Python Imaging Library) to shrink the original images down to a detail thumbnail and an even smaller grid thumbnail.
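The size arithmetic those scripts rely on can be sketched without the imaging library itself (the function name and size targets here are illustrative; with PIL/Pillow, `Image.thumbnail()` applies the same keep-aspect, never-upscale rule before resampling):

```python
def fit_within(width, height, max_w, max_h):
    """Largest size that fits in max_w x max_h while keeping the
    aspect ratio, and never upscaling a smaller original."""
    scale = min(max_w / width, max_h / height, 1.0)
    return max(1, round(width * scale)), max(1, round(height * scale))

# e.g. shrink one original for the details page and again for the grid
# (target dimensions are made up for illustration):
detail = fit_within(3000, 2000, 1080, 1920)
grid = fit_within(3000, 2000, 256, 256)
```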

April 18

Add code to check for network connectivity problems and deal with them accordingly

April 19-25

Download content for first categories. Cats, dogs, cities, outer space etc.

Also, from April 19th to the end of April, I don't do much programming for the app, as I am busy with other things, including sending Android-related patches to XScreenSaver for its 5.35 release.

April 30

OK, with all this content, the initial JSON is starting to get pretty big. Even though this is version 1, I will have to deal with this sooner or later, and dealing with it now will cause fewer headaches later. So I start splitting the JSON up.

It makes things much more complex, but it will inevitably be this way anyhow if the app is a success. It's complex due to mutual exclusion - UI events etc. can be happening between the request for new JSON image URLs and their processing. For the next 19 days I will alternate between dealing with this, and everything else that needs to be done.
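The server side of the split can be sketched simply - the page size, file names and JSON fields below are all hypothetical, since the post doesn't show the real format (the hard part, as noted, is the client-side concurrency, not this):

```python
import json

def split_index(entries, page_size=48):
    """Split one big wallpaper index into page-sized JSON documents.
    Each page names the next page so the client can keep paging."""
    pages = []
    for start in range(0, len(entries), page_size):
        n = start // page_size
        chunk = entries[start:start + page_size]
        nxt = f"page_{n + 1}.json" if start + page_size < len(entries) else None
        pages.append((f"page_{n}.json",
                      json.dumps({"wallpapers": chunk, "next": nxt})))
    return pages

pages = split_index([{"id": i} for i in range(100)], page_size=48)
```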

May 14

So I have been implementing the splitting up of the JSON primarily since April 30th. My code was refactored a lot between April 30th and May 3rd to deal with this. By May 14th I have the components for split JSON, but a lot of crud has accumulated and the logic is a little off. Much of the crud is due to the splitting of the JSON, but that is not all of it - there is also duplication of code and unneeded complexity. It would be quicker to just start from a fresh Android project, and string together the various components of what has been written so far.

To prepare for that I clean things up. I move any string in the code to strings.xml. I add local Android information for various licenses in the server database. I add language to the web REST API. I modify the UIL to deal with out of memory errors on older devices. Then I start rewriting the app from the ground up.

May 15

I put an onScroll listener on the image grid, and use the end of the scroll as a trigger to load more JSON

May 16

I get rid of code duplication among the category and recent/popular activities. I combine the common code, and subclass the unique functions to different classes.

UIL has an annoying flicker when the data set changes, so I change the code to not reload on that signal. I start the app by downloading a small JSON, and when that's processed, do two things asynchronously - load those images, and fire off another JSON to have the information to load the next 48 images off the screen if we scroll down.

May 19

We're headed into the home stretch. I register a domain name for the app. I add Google Analytics (it complains I send too much analytics data, so I cut down on the number of messages I send). I fix up the design some. I publish the app to alpha testing on Google Play.

While there is something to be said for a waterfall method of programming and releasing a polished jewel, the reality is that my income or capital or what have you is not unlimited. Also, I would like to start seeing what the market response will be. So I prepare for release.

May 20

Some minor tweaks and - release! Yay! I post to my Facebook and Twitter pages and can see from the server and Google Analytics that I am getting some downloads.

In a few hours I see that I released with a bug in the code. In Android Marshmallow (6.0), permissions changed, which ultimately renders the app unable to set wallpapers on Marshmallow devices. I had QA'd the app on a 6.0 AVD/emulator, but not the set wallpaper step. I already dealt with this problem on another app, so I code up a fix and release version 1.1 of the app.

May 21

I set up some ad campaigns for the app. I don't want to do a big promo right off the bat, but to drive in a trickle of interested users. Also, it takes a little while to get ads set up, approved and tuned right.

Then I write this up.

I have plans for future versions. One of the first is to download more wallpapers, so I am already on that. I also have other ideas which I punted on for the first version. You can download it now.

So I am still downloading new images and coding up various improvements for the next version. Also some other things I put aside I will get back to working on. Nonetheless, all that considered, I should start thinking of my next app. I want to put out a few apps that have potential, and then hopefully one will take off somewhat, and then I can put more wood behind that arrow. So soon I'll start thinking about what my next app will be.

Tools used:

Database schema design: MySQL Workbench
Android programming: Android Studio - the code is Java, with Android-specific classes and quirks. I run the Android Studio IDE locally on my Ubuntu machine.
Android 3rd party libraries: Android Universal Image Loader
REST API programming: Python 3 on an Apache web server, hooked to a MySQL backend. Running on a Debian Linux VPS at Linode. I use vi to edit the Python code on the server.

[/android] permanent link

Tue, 01 Mar 2016

Releasing an Android app on Google Play

Summary: I started releasing Android apps in 2011. This is about my recent release of a simple app I wrote in two weeks, from which I hope for financial reward commensurate with the time spent (or better). I walk through my whole process.


I released my first Android app on Google Play back in 2011. Since then I have released (and sometimes unreleased) a number of apps.

For the past few months I have been working on a yet-to-be-released spreadsheet app, which will take a while to write. While it has made some progress, and I have had fairly realistic expectations of how long it would take to write, I miss the ebb and flow of a more agile release-update-release cycle. So I spun an app out of the framework written thus far. But that didn't really do it for me.

Since I'm in the midst of a long project, I don't want to get enmeshed in another long project. I wanted to write an app in a short time frame, which might potentially make me some money and be useful to people. So the two requirements are that it be an app of a type that is popular, but which I could write quickly. Of course, this means other people can write it quickly as well, and since it is of a popular type, there will be a lot of competition. This is OK though - there are problems of some type no matter what I do. Another reason potential competition is OK: this app will be finished quickly, so even if it is a total waste of time, it is not much of a waste. So I wrote the app. It is not going to be a total waste of time in any case, because even if it has no commercial success, I learned some things while doing it, which I can bring to other apps.

February 14
So the idea of doing this in general I had been mulling for a few days. On February 14th, I decided to do a Stopwatch. It is something I could do quickly enough, and lots of people want one. Since it's popular and easy to write, of course there is a lot of competition. But I can knock one off quick so even if it's a waste of time, I'm not wasting much time. I started off looking at what stopwatch apps were popular on Google Play, what their quality rankings are, how many downloads they had, when they were first released, when they were last updated, what their features were, how they monetized, and that sort of thing. I also read what people said about the apps in their Play comments section, both pros and cons.

February 16
Then on February 16th I went on F-Droid to see what FLOSS Android stopwatch apps had been released. I looked at two: one with an Apache v2 license, one with the Perl Artistic License. I took a look at how they ran, and then at how they laid out their classes etc.

I saw that apps with stopwatches often include not just a stopwatch (time starts at 0 and increases) but a countdown timer (time starts at a point and decreases to 0) as well. However, my app is going to be a minimal viable product that I want to do quickly. So I decided to do it in the Unix spirit of an app that does one thing and does it well. I can always tack on a countdown timer later.

I saw that some of the apps had notifications and lockscreen features, which I hadn't thought of. Actually, this app scratches an itch of my own, of a sort: last summer I went jogging using someone else's app, and was not happy with the result. I wanted a stopwatch with laps that could survive a long jog. So I resolved to put this in the way I wanted - a resilient stopwatch.

Some of the apps displayed hundredths and thousandths of a second, but those go by so quickly on the display that it's pointless - although one app managed to display hundredths of a second decently. I display only tenths of a second on the clock, but put hundredths of a second on lap times. If people really want milliseconds I'll add that - I just don't want too much stuff filling the UI.

So with this in mind, I begin programming. I'm using the stable version of Android Studio on a System76 laptop running Ubuntu 15.10.

One thing I want right away is as big a clock as possible. I decide to start with a TextView. I want it to fill the width of the screen. I'm not exactly sure how to fill the width of the screen with a TextView, and don't find a satisfactory solution until February 23rd.

Most of the other Stopwatch apps have two buttons, and I use two buttons as well.

I also look for a nice icon to use for the app. There is an icon available from https://github.com/alecive/FlatWoken . The license is CC BY-SA 4.0. They say "the iconset is free to use, including commercially, but please consider that if you do convey any monetary income from its use I kindly ask that we arrange for a fair compensation." I've paid for app icons before, and if this app winds up in the black, I will send them money commensurate with what I have paid those who required payment for commercial use. This icon is easy to understand, looks nice, and is available in everything from 512px to 16px sizes. Perfect for my needs.

I then give some preliminary consideration to what my app blurb on Google Play might say for the English language listing.

February 17
I want the stopwatch to be resilient, so I create a Service to run it. On Android, Services are sort of like Unix background processes. I set the service to be destroyed (Activity onDestroy method) if the stopwatch is not running, and if it has no state (no stopped or lap information, whether it was never started or reset). All of the boilerplate to connect an Activity and Service - ServiceConnection, Binder and so forth - I had in my inchoate Spreadsheet app code, so I just copied it over. I also do a little work on the two buttons.

February 18
I add a Constants class which shares final constants between the Service and Activity classes. I begin implementing some of the stop and start functionality in the Service, as well as on the button which handles that.

February 19
So, this is a stopwatch, which means I need a timer running. The Android Timer class docs say ScheduledThreadPoolExecutor is the preferred way to do this, not Timer. But what are the parameters etc. of ScheduledThreadPoolExecutor? I read the class docs, and also look at Google's Android sample code to see which apps use it. I get an idea of how it works, experiment a little, and put it into the Service. With every tick, I send out a broadcast, which is received by the Activity, which updates the clock.

February 20
I make the display of the current time more resilient, lasting through presses of the home button, back button, screen rotation and so forth. I also add reset functionality, for when that should be called.

February 22
Busy through the weekend, I get back to work on Monday. I write a satisfactory method of fitting the TextView text size to the maximum allowed. While the app is running, I run a test - I create an off-screen test StaticLayout with a TextPaint, and keep increasing the text size until the text won't fully fit within the StaticLayout's width any more. Then I use the last size that fit on the real TextView. I can probably improve this algorithm, but it works for now.
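The idea, sketched outside Android: treat the off-screen layout as a measuring function and search for the largest size that still fits. My loop above grows the size linearly; a binary search gives the same answer faster. The measuring function below is a toy stand-in for the StaticLayout/TextPaint measurement, and the glyph-width constant is made up:

```python
def fit_text_size(text, max_width, measure, lo=1, hi=512):
    """Binary-search the largest text size whose measured width fits.
    measure(text, size) stands in for laying the text out off-screen."""
    best = lo
    while lo <= hi:
        mid = (lo + hi) // 2
        if measure(text, mid) <= max_width:
            best = mid       # fits: remember it, try larger
            lo = mid + 1
        else:
            hi = mid - 1     # too wide: try smaller
    return best

# toy measure: pretend each glyph is 0.6 * size pixels wide
measure = lambda text, size: int(len(text) * size * 0.6)
size = fit_text_size("00:00.0", 1080, measure)
```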

I also put some very initial lap functionality in.

February 23
I add more lap functionality. I save lap info through screen rotations and other Activity changes.
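The core stopwatch/lap state can be sketched outside Android - this is an illustration of the approach, not the app's actual code. All the state is a base timestamp, an accumulated total and a lap list, which is what makes it easy to bundle up and restore across rotations:

```python
class StopwatchState:
    """Stopwatch core with an injectable clock (milliseconds)."""
    def __init__(self, clock):
        self.clock = clock      # time source, injectable for testing
        self.base = None        # clock value when (re)started; None = stopped
        self.accumulated = 0    # ms banked across stops
        self.laps = []          # elapsed ms at each lap press

    def elapsed(self):
        running = self.clock() - self.base if self.base is not None else 0
        return self.accumulated + running

    def start(self):
        self.base = self.clock()

    def stop(self):
        self.accumulated = self.elapsed()
        self.base = None

    def lap(self):
        self.laps.append(self.elapsed())
```

Because the whole state is a few numbers and a list, a rotation only has to save and restore those; the elapsed time is always recomputed from the clock.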

February 24
I show my friend, who exercises regularly, what I have so far. He likes the app's simplicity. He would also prefer hundredth-of-a-second and even millisecond accuracy. I track milliseconds internally, but don't yet display them - or even hundredths of a second.

I set it so that the last added lap always appears on-screen, and older laps begin fading off-screen.

I want the lap ListView views to be more flexible, so I create a custom ArrayAdapter for the ListView, which generates the views I want.

I stick in an ad that links to one of my other apps when closing. This is the first attempt towards monetization.

At this point, we do have a minimally viable product I think. The app is now minimally useful. But there's a few more things I want to try before release, so I will take a stab at those.

February 25
I add a notification widget to the app. The notification widget exists in the lockscreen as well. It is updated every second with the new time. I pull one of Google's CC-BY licensed clock icons from the web to use in the notification.

February 26
I begin playing around with a behavior where clicking on the notification goes to the app activity. While testing this out, I notice an unrelated problem. The app had been exiting in some circumstances even with state. The notification testing I have been doing flushed this unrelated error out. So I fix that bug.

February 27
Pressing notification now goes to the app. I could put action buttons here, but will postpone that beyond this minimally viable product.

I change from AppCompatActivity to Activity and change the style to Holo. It makes things easier...

I decrease the size of the lap numbers and increase the size of the lap/total times. I also add hundredths-of-a-second accuracy to the lap times.

I add lap time sharing. So people can e-mail their lap times, send it to a notepad, or what have you.

I also put in Google Analytics. I tried it a while ago on an app and not much happened. I try again. Hey, it works OK! Either they made it simpler or I finally figured it out.

February 28
I QA the app on a variety of phones and tablets, and some AVD emulators. Then I push it to Google Play alpha testing. I send screenshots to Google Play, fill out the blurb there etc. I look at the Google Analytics console - wow, this really works. Wasn't sure what to track, so I am doing button presses and that sort of thing. There are a few things I'd like to know, including what frustrations people may be having and that sort of thing.

February 29
I push from Google Play alpha to production. Then I set up an Adwords campaign. My ad is approved! I run a few ads, many see the ads, a few click, and a percentage of those install the app and show up as a blip on Google Analytics - currently 4 people. Let's see - none of them are using laps. They tend to check the app out for a few seconds and then leave. Although some use it for longer.

So I will probably run Adwords ads for this, see what feedback is, see if it crashes and so forth. Then I may pay to translate it into other languages, once I get a sense English language users are happy.

Many of these apps also have countdown timers. If it is heavily requested and seems necessary I may add one. Doing this was also about agile, pulling the trigger, minimally viable product etc. so I passed on that feature for now as this has most of the desired stopwatch features.

As Analytics is working and I'm tracking ad conversions more closely, some time down the road I may run some Facebook ads for this. Or run ads wherever - I have some grasp of Analytics now and want to explore the different promotion avenues. If Google Analytics doesn't have what I want I can even roll my own. But I will use their backend for now in terms of conversion tracking.

So I'll continue pushing this forward and see how that goes. I'll probably do another app of this manner. I don't want to do something like the many months long spreadsheet app I am in the midst of. Two weeks for this was good. I could even do a longer one. Although not that long - releasing this app was just a break from my spreadsheet app, which I have been working on for months. Years really, although I put the project aside for many years and picked it back up last year (2015). I don't need another long project like that, just some quick apps which might help people and may make a little money like this Stopwatch app.

[/android] permanent link

Sat, 21 Mar 2015

Preparing for Artificial Intelligence and Machine Learning

I took a class in AI in late 2013, but I only started looking at practical engineering implementations for ML in the past few months.

In looking at things like scikit-learn, I saw that a lot of the algorithms are already coded. You can even automatically test which classifier/model will be best for the data. In looking at the package and examples, I suspected that the hard part was wrangling field data into an acceptable form for the algorithms.

I was graciously invited to an event a few months ago by a fellow named Scott, at which there were several people with good field knowledge of AI and ML. I talked to two of them about algorithms and data. Both of them made the point that getting the data wrangled into a suitable form was the hard part. I then went onto the net and read about this more carefully, and others with experience seemed to agree. So it is like other programming, where getting the data structures and data input right is usually the hard part, since if that is done well, implementing the algorithms is usually not much of a chore.

So I began working on my ML project. What does it do? Sometimes I go to local supermarkets, and what I am looking for is out of stock. So this ML predicts whether the supermarket will have the item I'm looking for in stock.

I architected the data structures (which consist of purchases, and observations that certain products are missing) and programmed the inputs. Then I added Google Maps so I could see where the local supermarkets were. The program would prefer close supermarkets to far ones.

Now I have run into a problem/non-problem. In architecting the solution so that the ML models and algorithms could better understand the problem, I architected a solution so that I could better understand the problem as well. Before I would pretty much go to my closest supermarket, if they were out of stock then on to the next closest one, and so forth. Now I have all that data available on my Android, including a map, and deciding which supermarket to go to is trivial. I don't need the ML so much any more. I wonder how often this happens - you build a solution so that AI/ML can be used, but once all the data is recorded in an understandable way, you don't need the AI/ML any more. Although there can be situations where there is a lot of data for someone to remember in their head, but not a lot for an ML solution.

Anyhow, I went through enough trouble putting all of this together that I will still go through with writing a program that predicts whether the items I want are in stock. I'll also make a map with time/distance information between my home and the supermarkets, and between the supermarkets themselves. Then my program will give me advice on which supermarkets to try first.
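A toy version of that kind of predictor - a smoothed per-store, per-item frequency estimate. This is a stand-in illustration of the idea, not the program described above:

```python
from collections import defaultdict

class StockPredictor:
    """Estimate P(in stock) per (store, item) from past visits.
    Laplace smoothing makes unseen pairs default to 0.5."""
    def __init__(self):
        self.seen = defaultdict(lambda: [0, 0])  # (store, item) -> [hits, visits]

    def record(self, store, item, in_stock):
        hits, visits = self.seen[(store, item)]
        self.seen[(store, item)] = [hits + (1 if in_stock else 0), visits + 1]

    def prob_in_stock(self, store, item):
        hits, visits = self.seen[(store, item)]
        return (hits + 1) / (visits + 2)  # Laplace smoothing

    def best_store(self, stores, item):
        """Which store to try first, by estimated in-stock probability.
        (The real program would also weigh travel time/distance.)"""
        return max(stores, key=lambda s: self.prob_in_stock(s, item))
```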

[/ai] permanent link

Wed, 26 Feb 2014

Porting OpenGL in C and C++ to Android

I took a computer graphics class for the winter 2013 semester, in which I learned how to program in C++ with the OpenGL (and GLU, and GLUT) library. The most fun part, which I unfortunately did not have enough time for, was my final project in which I could draw pretty much whatever I wanted.

After finals were over, and Christmas came and went, I began diving into OpenGL for Android. Android does not use OpenGL per se, it uses the OpenGL ES library.

When I was porting open source apps that use the Simple DirectMedia Layer (SDL) library, some of them had OpenGL hooks, and I had skipped those when porting to Android.

Initially I kind of dove in at too high a level. While the Android example apps used OpenGL ES 2 and so forth, most of the code I was looking at was more geared toward OpenGL ES 1 if anything. So I rewrote Android's hello-gl2 app to target OpenGL ES 1, not OpenGL ES 2. I also made sure it had the C++ values exported properly.

I decided to revisit those open source SDL apps with OpenGL that I had passed over previously. The first I looked at was Pipe Walker. It had a minimal number of OpenGL calls, and I ported it without much of a problem.

One thing I did was install the OpenGL ES library on my Linux desktop, and then target my desktop for the program, but pointing to the OpenGL ES library, not OpenGL. Once I got that working, porting it to Android was less of a hassle.

Then I looked at Jigzo, an open source jigsaw puzzle app that used SDL. It had a few more OpenGL calls, but was still fairly simple. So I ported that over. Again, I rewrote the desktop app to use the OpenGL ES library on my desktop, then I ported it to Android.

I then noticed the app Anagramarama which used SDL. It didn't have OpenGL calls, but I just noticed it while looking through open source SDL apps. So I ported that to Android as well. It's really designed for a standard monitor, so I made it tablet only - it does not work with phones well in its current form.

Pipe Walker and Jigzo used minimal OpenGL calls, so hand-porting it to Android was easy enough. But as I looked at apps with more code, hand-porting all the OpenGL stuff looked like more work. So I began looking how to automate this.

One solution was regal. Its GitHub page says regal is "a user-space OpenGL layer for OpenGL 2.x, 3.x, 4.x, Core contexts and ES 2.0. Regal implements OpenGL loading, emulation for ES and Core contexts and tools for debugging". Cool! I grabbed it and compiled the dreamtorus example app right on my Android. Excellent.

Then I looked at the size of libdreamtorus.so. About 20 megs! To figure out the total of what my Android app would be I would have to take that and then add on the size of the rest of the Android app. A 20 meg dynamically linked shared object library is not big for an average desktop or server, but it is for an app on a smartphone.

Pipe Walker, which I had hand-ported myself, came out to less than 3 megs all told. Jigzo, even with its jigsaw puzzles, was less than 6 megs in total. Yet just the regal library itself would be 20 megs on my device, never mind the rest of the app.

If I wanted to continue with regal I'd probably want to work on trimming that library size down. I don't think regal had much OpenGL 1 support either. I decided to look for other options.

Jamie Zawinski of Netscape fame had ported his XScreenSaver app over to iOS, and had faced the rigamarole of all that OpenGL to OpenGL ES porting. Amazingly (to me), he was able to automate doing this within three days. Pretty much all of this work is done in a compatibility shim consisting of a file of C code and two header files.

As the file was within XScreenSaver, I thought XScreenSaver would make a good first app to port with this method. But XScreenSaver has a lot of libraries, a lot of dependencies, and a lot of header files which themselves include other header files. I like to work on simple things when getting familiar with something, and then work my way up to the more complex stuff.

As with Jigzo and Anagramarama, I tried to find the simplest XScreenSaver hack to compile on my desktop, and rip it out into its own package, with as simple a Makefile as possible and as few dependencies as possible. The step after this would be to point to the OpenGL ES library, and then do the Android port. But it was difficult to do - the XScreenSaver hacks had a lot of dependencies.

So I took a look at other screensaver packages. The Really Slick Screensavers package (rss-glx) contains the official Really Slick screensavers as well as some additional open source screensavers. They were much easier to turn into simple standalone applications with simple Makefiles. The Sundancer one was simple enough that I hand-ported it from OpenGL to OpenGL ES, not even using jwz's GL -> GLES code. Once that was done, I worked on porting it to Android.

It was a little difficult - I had never done a wallpaper on Android before, never mind a live wallpaper. I found some code that pointed to an EGLSurface, as opposed to a Canvas, for live wallpapers. Then I hooked that into the code I wrote which could do OpenGL ES 1 rendering on the native (C/C++) side of JNI. A little more banging on it and it worked as a live wallpaper. I tried some of the other rss-glx wallpapers but there were various problems. Then I went to work on the Hufo's Tunnel screensaver. It had a few more OpenGL calls than Sundancer, in a more complex manner, so I pulled in jwz's GL -> GLES code. It worked.

I wanted multiple screensavers in one app, so I worked on getting both of them in there. I also wanted users to be able to pass some of the flags the desktop package accepts, so I put these in the wallpaper settings. The tunnel could be made smoother or coarser. The dancing sun could have its sunbeams increased or decreased. Then I wanted to make sure the screensavers wouldn't interfere with each other or with zombie instances of themselves - something I still have probably not totally fixed yet. I want to reduce state as much as possible, especially global, long-lasting state.

Sending command-line flags to an Android app works a little differently because of the application lifecycle. Plain getopt keeps global state, so a reentrant, thread-safe replacement is preferable. Luckily one exists, optlist, so I swapped getopt out for it. It worked great.
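To make the idea concrete, here is a minimal sketch of a flag parser that keeps all its state in local variables, so several threads can parse different argument arrays at once. This is an illustrative toy, not optlist itself; the flag names are hypothetical.

```java
import java.util.HashMap;
import java.util.Map;

public class FlagParser {
    // Parses "--key=value" and bare "--key" flags into a fresh map.
    // No static or global state, so the method is reentrant.
    public static Map<String, String> parse(String[] args) {
        Map<String, String> flags = new HashMap<>();
        for (String arg : args) {
            if (!arg.startsWith("--")) continue;
            String body = arg.substring(2);
            int eq = body.indexOf('=');
            if (eq >= 0) {
                flags.put(body.substring(0, eq), body.substring(eq + 1));
            } else {
                flags.put(body, "true");
            }
        }
        return flags;
    }

    public static void main(String[] args) {
        Map<String, String> f = parse(new String[] {"--beams=12", "--smooth"});
        System.out.println(f.get("beams"));  // 12
        System.out.println(f.get("smooth")); // true
    }
}
```

Because every call builds its own map, two wallpaper engines parsing their settings at the same time cannot clobber each other, which is the whole point of dropping getopt.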

The tunnel app uses the bzip2 library so I included that as well.

So I released the app with two live wallpapers - Sundancer and Hufo's Tunnel. It had been unstable, but once I removed callbacks from the Runnable command when destroying the surface, it became more stable. I QA'd it some, it looked good, and I published it. Hopefully the code is stable enough - I think it should be, as long as someone is not purposely trying to break it.

[/android] permanent link

Mon, 26 Aug 2013

Developing Android in Emacs

It's been weeks, maybe months, since I fired up Eclipse (or Android Studio) to do any Android programming. I do everything with emacs, vi, ant, astyle, adb, and the Android commands "android" and "monitor".

The last thing really pulling me into Eclipse was the Control-Shift-O that automatically pulled in imports for Android. I have begun solving that in Emacs. Now I have this in my Emacs init file:

(add-to-list 'load-path "~/.emacs.d/jdee-2.4.1/lisp")
(setq jde-global-classpath '("/usr/local/android-sdks/platforms/android-18/android.jar"))
(load "jde")

I downloaded the Java Development Environment for Emacs (JDEE), and point it at the Android jar for a classpath. Now if I want to automatically import a class such as TextView, I put my cursor over TextView and type C-c C-v C-z. I confirm TextView, and the class is auto-imported.

Sometimes I am given several choices, and I pick the most appropriate one. Compared to Eclipse, I get more false choices, some of them non-Android and bogus. I could work on this, although it has not been a big problem.

Sometimes I run ant debug in one window, see which classes need importing, then do the import keystroke on the various classes in the other window.

I'm sure I can automate this more but it works for me now.

[/android] permanent link

Wed, 21 Aug 2013

Bitcoins are worthless

A few months ago, as more attention began being paid to Bitcoin, I began looking into it. I discovered that Bitcoin currency was not backed by anything. It had no value other than its purported value as a currency.

This means it is worthless. Currencies only have value if they have some inherent value, which is why gold has been used as a currency for so many millennia. Two years before I was born, US Federal Reserve notes (dollars) were still backed by gold. Gold is used as a currency and for jewelry, but it has other uses as well, in industrial settings and such.

Bitcoin has no use other than as a "currency". Which means it is not even a currency. Because all real currencies have an underlying value, like gold has.

But isn't Bitcoin valuable now? Can't you exchange bitcoins for dollars, euros and so on? Doesn't that make them not worthless?

This is true, but it is temporary; they are inherently worthless. People who get into pyramid schemes early might actually come out ahead if they cash out early, but there is no inherent worth there either. Gold and silver were valuable 2000 years ago and are still valuable. If someone held one hundred losing lottery tickets from last week, those tickets were considered worth one hundred dollars last week; today their value is nothing. If I bought five tickets for carnival rides last week, and the carnival has packed up and moved on, my tickets are now worthless. Bitcoin is the same.

In March 1637, there was a craze for tulip bulbs in Holland. A tulip bulb could sell for 3000 florins or more, which would be about 30,840 euros ($41,250) today. A speculative bubble built up, then collapsed, and the price of tulips plunged back toward its more normal level. So temporary situations like this can exist - tulips sold for tens of thousands of dollars in one country, in the space of one month - but it could not last. Tulips do not have that much inherent value.

What about currency like US dollars?

As I said, US currency was backed by gold until two years before I was born. So obviously, people felt this was necessary.

Take fourteen hundred-dollar bills that are worn and torn enough to make them less valuable to collectors, but that would still be accepted as currency. Does anyone think those bills will be worth more than an ounce of gold five centuries from now? Or, with inflation, even fifty years from now?

Currencies are only as valuable as their inherent worth. Only an institution as powerful as a government can get away with creating a fiat currency that is considered to have worth. Not that it always works - consider countries with runaway inflation they can't control. Consider the Confederate dollar, or the Reichsmark. Both had value for a time, but when their governments were defeated, their currencies became worthless. Even before the Reichsmark, Germany's pre-Nazi Papiermark was becoming worthless over time. Governments love to be able to print money and say it has value, but they only have so much power in this regard, and it only lasts so long. England thought it had the power to keep the pound at a certain level; George Soros is said to have made over one billion pounds proving it wrong. The power of governments to print money is limited and temporary.

Of course, the US Federal Reserve says of itself that it has a "unique structure that is both public and private" and is "independent within the government" but not "independent of government". That aside, US dollars do have value. You can pay taxes and other fees with them. You can go to the post office and ship packages with them, and buy boxes and envelopes while you are there. Soldiers can go to their local PX or commissary and buy all sorts of things with dollars. None of this has the long-term useful value of gold, though.

What do Bitcoin partisans say?

I find this interesting. The most clear answer I've gotten was from a wiki called "Bitcoin myths". Here are the relevant parts from that web page, with the argument against Bitcoin, which they consider false, and then their reply:

Bitcoins are worthless because they aren't backed by anything

One could argue that gold isn't backed by anything either. Bitcoins have properties resulting from the system's design that allows them to be subjectively valued by individuals. This valuation is demonstrated when individuals freely exchange for or with bitcoins. Please refer to the Subjective Theory of Value.

See also: the "Bitcoin is backed by processing power" myth.

The value of bitcoins are based on how much electricity and computing power it takes to mine them

This statement is an attempt to apply to Bitcoin the labor theory of value, which is generally accepted as false. Just because something takes X resources to create does not mean that the resulting product will be worth X. It can be worth more, or less, depending on the utility thereof to its users.

In fact the causality is the reverse of that (this applies to the labor theory of value in general). The cost to mine bitcoins is based on how much they are worth. If bitcoins go up in value, more people will mine (because mining is profitable), thus difficulty will go up, thus the cost of mining will go up. The inverse happens if bitcoins go down in value. These effects balance out to cause mining to always cost an amount proportional to the value of bitcoins it produces.

So basically they're saying Bitcoin is valuable due to the subjective theory of value.

Then they go on to try to show how the labor theory of value is a fallacy - and fail.

Whether you agree with them or not, there have been serious arguments against the labor theory of value by people like Eugen Böhm von Bawerk. The Bitcoin myths FAQ does not delve into this; it sets up a straw man of what its authors think the labor theory of value is and then proceeds to knock it down: "Just because something takes X resources to create does not mean that the resulting product will be worth X." Yes, if that were what the labor theory of value was, it would be a good argument against it. But Adam Smith, David Ricardo and company did not construct a theory of value that could be knocked down with one sentence after a few seconds of thought. Especially Ricardo, who wrote volumes on his theories of value.

The Bitcoin FAQ seems like it was written by someone recapitulating arguments that they heard a more educated person make. I can't imagine someone having the basic realization that these theories of values came into play with Bitcoin, and then spelling out these ideas in such a sloppy manner. I'm more concerned with that original argument than the mistakes of what was probably a sloppy transcriber.

To say that Bitcoin gets its value because the subjective theory of value is true is not a falsifiable argument: if people later decide it is worthless, one can simply say it no longer has subjective value.

On the other hand, if Bitcoin continues to be exchangeable for goods with real value over the long term - that would disprove the labor theory of value. Like Wile E. Coyote walking off the edge of a cliff, things like tulip bulbs, Bernie Madoff's pyramid scheme and Reichsmarks can seem to have great value for some period of time - but eventually people wake up and realize they are worthless.

One thing I could do with this realization is make money shorting Bitcoin. Of course, if I were personally liable, Bitcoin could shoot from $122 USD to $1000 before crashing to near $0, and that last part wouldn't matter, because I would have been wiped out by the margin call on the way to $1000. So I would have to use an LLC or something. Even then, if it went from $122 to $1000 to $0, I could still lose my money, so the bet would have to be small enough that I could cover any realistic margin call. Also, as the saying attributed to Keynes goes, "Markets can remain irrational a lot longer than you and I can remain solvent." But I think I can afford shorting one bitcoin, just for the principle of it.

To put my money where my mouth is, I think I'm going to look into shorting one bitcoin using Bitfinex or someplace like that. Of course, the reliability and trustworthiness of Bitfinex comes into play. As do regulations, legalities etc. It would only be $122 though.

[/bitcoin] permanent link

Tue, 25 Jun 2013

IRS was targeting "open source" groups

Something popped out at me while reading a news article about the IRS yesterday. According to a New York Times article:

The acting I.R.S. commissioner, Daniel I. Werfel, formally ordered an end to all such "lookout" lists on Monday when he issued an assessment of the controversy that has led to harsh criticism of the nation's tax collector.
[...]
But groups with no political inclinations were also examined. "Open source software" organizations seeking nonprofit status "are usually for-profit business or for-profit support technicians of the software," a lookout list warns. "If you see a case, elevate it to your manager."

So the news has been saying the IRS was specifically targeting "Tea party" groups. Then we learned recently that it has been targeting "Occupy" groups. Now we find out it has been targeting open source as well.

I submitted this to Hacker News, but someone changed the title I gave the submission into something irrelevant to open source.

[/freesoftware] permanent link

Wed, 22 May 2013

Gnome terminal resize info on Ubuntu 13.04 - raring

Ubuntu has disabled the resize info tooltip for Gnome Terminal once again, and has once again changed the convoluted method needed to restore it.

In this iteration of Ubuntu:

1) "sudo aptitude install compiz-plugins"
(or "sudo apt-get install compiz-plugins" if you don't have any aptitude)

2) "sudo aptitude install ccsm"

Then run "ccsm". In the filter search, search for "Resize Info". By default the box is unchecked and the tooltip is turned off. Check the box. Compiz will freeze up for a few seconds and then go back to normal. You now have the Gnome Terminal resize info tooltip enabled.

[/linux/ubuntu] permanent link

Wed, 17 Apr 2013

VPSs, Nagios, .com domains

My revenues have been between $900 and $1425 over the past four months, so in January I decided to splurge and get VPS instances from two providers.

I read online about what people thought. A lot of people liked Linode, so I went with them. For $20 a month I get 2 TB of outbound transfer, 24 GB of storage, priority CPU access on a share of eight cores, and 1 GB of RAM. In January that was 512 MB of RAM and 200 GB of transfer, but there has been competition in the VPS space.

Rackspace seemed popular as well; people were less enthused, but it was deemed OK. So I got a VPS with them too. With the lowest-priced "cloud server" you get 20 GB of disk, 1 virtual CPU, and 512 MB of RAM. Pricing is $16.06 a month but does not include traffic; with 32-33 GB going out it comes to about $20 a month. I send out less than 1 GB a month from it, so I am charged around $16.18. These policies, of course, determine how I use the servers - I served 33 GB of data from Linode in March.
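A back-of-the-envelope check of those Rackspace numbers: the per-gigabyte bandwidth rate is not stated above, but a rate of $0.12/GB (my inference, an assumption) fits both data points - about $16.18 at 1 GB out and about $20 at 33 GB.

```java
public class VpsCost {
    // Monthly bill as base price plus metered outbound traffic.
    // The $0.12/GB rate is inferred from the two bills quoted above,
    // not taken from a price sheet.
    static double monthly(double baseUsd, double gbOut, double perGbUsd) {
        return baseUsd + gbOut * perGbUsd;
    }

    public static void main(String[] args) {
        System.out.printf("%.2f%n", monthly(16.06, 1, 0.12));  // 16.18
        System.out.printf("%.2f%n", monthly(16.06, 33, 0.12)); // 20.02
    }
}
```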

I'm running Debian 6.0 on both servers. I run Debian because - what else am I going to run? I've worked with Debian since Vincent Yesue introduced Debian to me back in the mid 1990s. I'm familiar with it. I run Ubuntu on my desktop so I'm familiar with dpkg. I could run Fedora or CentOS (can't afford Red Hat at this stage) but Debian seemed fine enough.

I decided to set up a Nagios instance on my desktop and watch Dreamhost, Bluehost, Rackspace and Linode. I knew how flaky Dreamhost was, now I really know. Any how, I've been slowly shifting everything to the VPSs.

I run BIND 8 on both VPSs for primary and secondary DNS. I also run Apache on both VPSs. Rackspace is the front end web site. Linode I use for serving epub files, and also to handle search queries. So I run MySQL on Linode as well.

Last week, Nagios said Linode was slow. So I began culling down memory usage on Apache, BIND and MySQL. Nagios still said it was slow. So I began timing web page gets from other locations, and Linode was fine. The connection from my ISP to Linode was just slow for a few hours. It's probably better I tuned it any how.

I had some domain name ideas while doing this, so I signed up with Namecheap and registered some domains. I will probably be holding most of my domain names there henceforth. The number of dot-com names registered is in the hundreds of millions, and it keeps going up. I remember back in 1996 when names like proof.com were still unregistered; I missed snapping that one up by a few days. Someone just e-mailed me offering to sell me a domain name for $350,000.

Then I saw a domain I wanted expiring. I used snapnames.com to scoop it up, and I got it. So now I have bookmarkflood.com. Most of my domains are connected to either books or bookmarks.

I want to improve my programming knowledge, more specifically Java, more specifically Android. But programming in general as well. Besides, Android is not all about Java - a lot of what I've been doing with Android has been C and C++ apps using the NDK. Or server side programs - usually Perl so far.

I've been reading Structure and Interpretation of Computer Programs. I have been taking my time to go through it. Right now I am on section 1.2.3.

[/vendors] permanent link

Tue, 16 Apr 2013

Mobile rising, Windows falling

Alexa lists Wikipedia as the 6th most popular web site in the world.

One nice thing about Wikipedia is Wikimedia data analyst Erik Zachte gives a detailed public summary of Wikipedia's web traffic. We have been hearing about the rise of mobile technologies like iOS and Android, and the problems Windows has been having, and that is well illustrated on Wikipedia. Windows browser share was at 55.73% last month, down from 89.5% four years ago.

[/android] permanent link

Thu, 03 Jan 2013

2012 in review

Well, I have had some small success with Android this year. Here are my month-to-month earnings:

I made $747.30 from my Android apps in November, then that number jumped to $1234.78 for December. From December 25th to 28th I made over $62 every day. I did not expect that to continue in the short term and it has not, today I made about $40 on Android.

One reason the amount I make on it matters is that it is self-perpetuating: the more I make on Android, the more time I can devote to programming Android apps.

[/android] permanent link

Fri, 12 Oct 2012

Processing large data files

I guess noticing this thing shows how little I know about programming, but I have now seen this come at me in two different directions and am now more aware of it.

The thing I am talking about is processing a large data file with a program. The large data file is in a certain format, so initially, since it is theoretically the easy way, I read the entire file into memory, into a data structure that fits that format. Once that is done, I start processing the data structure. Usually I am, one way or another, translating the data from the form it is in into another type of data structure, and outputting the new structure.

The problem is you have this huge data structure in memory, and are trying to manipulate portions of it into another data structure, and it just takes up a lot of resources. The work gets done, but it is too slow. Sometimes all of this memory use starts memory paging, and then the machine slows to a crawl.

My first encounter with this was when I wrote a Java program for my blunder suite of tools - pgn2fen. I would take a PGN (Portable Game Notation) file that was 7 megs or so, load it into memory, and then convert every move of every game in that PGN into a FEN (Forsyth–Edwards Notation) data structure, which represents a chess board position.

Initially, I loaded the file as a linked list of Strings and passed that entire list as a parameter to various classes. As the program came together, I made a big improvement: I created a second, shorter linked list alongside the large one. I would slice off a piece of the big list, like a slice of salami or a banana, and send that slice around to the other classes. The large linked list was rid of the data as soon as it was sliced off, and the smaller list holding the slice was discarded once it was processed. Then I would slice off the next part of the large list, and repeat. The code looks like this:

            // Move one game's worth of lines from the front of the big
            // list onto the short list, hand the slice off, then clear it.
            for (int i = 0; i < gameSize; i++) {
                shortList.add(longList.removeFirst());
            }
            g.doOneGame(shortList);
            shortList.clear();

This change made the program over ten times faster.

I recently faced a similar problem, this time with a Perl script translating an RDF file into an XML data structure. In this case, my machine would start swapping and take hours to process the file. It is perhaps not surprising that the effect on the machine was larger: the PGN files were usually under 10 megs, while this data file is over 240 megs. Between my desktop GUI, the RDF data structure, the necessary operations and the new data structure, my 4 gigs of RAM got swamped and my machine started paging. After a few hours the process was done, but I wanted to look into whether there was a better way to deal with this.

Again, if resources were infinite, it would always be programmatically easier to load the entire data structure, do the processing, and output the new data structure. But resources are not infinite - something I certainly learned in over a decade of professional systems administration.

In this case I switched from using the CPAN XML::Simple module, to using CPAN's XML::Twig module. From XML::Twig documentation:

[XML::Twig] allows minimal resource (CPU and memory) usage by building the tree only for the parts of the documents that need actual processing....One of the strengths of XML::Twig is that it let you work with files that do not fit in memory (BTW storing an XML document in memory as a tree is quite memory-expensive, the expansion factor being often around 10). To do this you can define handlers, that will be called once a specific element has been completely parsed...Once the element is completely processed you can then flush it, which will output it and free the memory. You can also purge it if you don't need to output it.

Which is what I do. RDF elements I need I grab with a handler, process, and then purge. RDF elements I do not need I have the handler purge immediately.
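The same element-at-a-time idea can be sketched with Java's built-in streaming StAX parser (this is an illustration of the technique, not the actual Perl script; the XML shape and element names are made up). Elements are handled as the cursor passes them, so the whole document never sits in memory as a tree:

```java
import java.io.StringReader;
import javax.xml.stream.XMLInputFactory;
import javax.xml.stream.XMLStreamConstants;
import javax.xml.stream.XMLStreamReader;

public class StreamingParse {
    // Walks the document one event at a time, counting <title> elements.
    // Each element is "processed" as it streams past and never retained.
    public static int countTitles(String xml) throws Exception {
        XMLStreamReader r = XMLInputFactory.newInstance()
                .createXMLStreamReader(new StringReader(xml));
        int titles = 0;
        while (r.hasNext()) {
            if (r.next() == XMLStreamConstants.START_ELEMENT
                    && r.getLocalName().equals("title")) {
                titles++;  // do per-element work here, then move on
            }
        }
        r.close();
        return titles;
    }

    public static void main(String[] args) throws Exception {
        String xml = "<books><book><title>A</title></book>"
                   + "<book><title>B</title></book></books>";
        System.out.println(countTitles(xml)); // 2
    }
}
```

As with XML::Twig's flush and purge, memory use stays proportional to one element rather than to the whole file.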

The processing now takes much, much less memory, and finishes much faster as well. A lot of the remaining time is probably spent instant-purging RDF elements that will never be processed.

Anyhow, I now see I have run into the same problem twice, and it was solved more or less the same way both times: I processed the large, original data structure one element at a time, and when I was done with an element I removed it from memory and went on to the next. Not the easiest way to do things programmatically, but a necessity with large data files and limited resources.

[/programming] permanent link

Getting Scheme to do REPL in Emacs

I took a college course last year, half of which was about learning Lisp and functional programming. I don't feel I learned that much about either in the course. I had taken a previous course with the same instructor, on graph theory, where I felt I did learn a lot - especially in subsequent courses, where I had to work with tree data structures and the like.

Anyhow, I decided to take another crack at Lisp and functional programming. Some of the great and/or successful programmers have a fondness for Lisp and recommend it, even if you don't see it around much any more. As Paul Graham says about his usage of Lisp, "Everyone else was writing their software in C++ or Perl. But we also knew that that didn't mean anything. If you chose technology that way, you'd be running Windows."

Structure and Interpretation of Computer Programs is often touted as a must-read book. When I first browsed through it a few years ago it seemed confusing. I'm not sure why that is, when I look at it now it mostly seems simple and clear. I'm still reading the first of the five chapters. They're very heavy on the "interpretation" part of their title, going into evaluation and eval etc. It's not yet clear to me why they're emphasizing this so much, but perhaps I'll understand as I read through the book.

My college course used Common Lisp. I understand CL is more of the real-world one, with more libraries, but also more cruft and less simplicity.

Scheme is simpler, more elegant, and easier to understand. Scheme defines functions with the symbol define. CL defines functions with the symbol defun. That alone tells you a lot about the dialects.

One thing I like about Scheme is that it has a small number of primitive expressions, with a few more derived/library expressions built on top of them. I like this simplicity. While these Scheme expressions deal with abstraction and the like, it reminds me of how almost all number-theoretic functions on the natural numbers derive from three primitive functions - constant, successor and projection - combined with the composition and primitive recursion operations. The computations that cannot be built from those three functions and two operations are rather offbeat, like Georg Cantor's diagonal constructions, which do little other than prove you cannot do every natural-number computation with those rules.
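As a toy illustration of that claim, here is addition built from nothing but the successor function and the primitive recursion scheme - a sketch in Java rather than Scheme, with the usual caveat that real primitive recursive function theory is defined over arbitrary arities, not Java ints:

```java
import java.util.function.IntUnaryOperator;

public class PrimitiveRecursion {
    // The successor primitive: succ(n) = n + 1.
    static final IntUnaryOperator SUCC = n -> n + 1;

    // add(m, n) defined by primitive recursion on n:
    //   add(m, 0)   = m               (base case: projection of m)
    //   add(m, n+1) = succ(add(m, n)) (recursive step uses only succ)
    static int add(int m, int n) {
        return n == 0 ? m : SUCC.applyAsInt(add(m, n - 1));
    }

    public static void main(String[] args) {
        System.out.println(add(3, 4)); // 7
    }
}
```

Multiplication can be built the same way on top of add, and so on up the arithmetic hierarchy.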

I also like that Scheme clearly marks non-functional procedures and forms with exclamation points.

Especially since Lisp is not heavily used nowadays, it seems obvious to me that people should first learn Scheme as that seems the best language to learn in. If they want to do some real world stuff and feel CL has better libraries or whatnot, they can then shift to CL.

So anyhow I've been going through SICP. The initial expressions could mostly be done with the Scheme interpreter guile. It does not have readline support by default like CL does, so I put into my .guile profile:

(use-modules (ice-9 readline))
(activate-readline)

As the programs became more complex, I wanted a more sophisticated REPL. Emacs seems to be what Lisp programmers use. I am not well-acquainted with emacs, even though I first started using it twenty years ago! I usually use vi, or nano, or Gnome gedit, or Eclipse's editor, or the like. Anyhow, doing elisp with Emacs is easy enough, but using Scheme is a little bit more work. I spent some time looking at it today and got it put together. Oddly, there's not really one place on the web which tells you how to do this.

In my emacs init file I now have:

(setq scheme-program-name "guile")
(global-set-key (kbd "C-x C-e") 'scheme-send-last-sexp)

I also have:

(split-window-vertically)

Just so I don't have to do "Control-x -> 2" when I start Emacs. If I start using Emacs more for editing, perhaps I'll comment that line out.

So I click the bottom window, type "Escape-x" and then "run-scheme". Then I click the top window and start typing in expressions. I usually do "Control-x Control-e" after each one to evaluate it. It evaluates in the bottom window which runs guile. I had the scheme-program-name set to scm and was running that for a bit, but switched to guile. Don't know much about either aside from that both seem to be copyrighted by the FSF, but the FSF seems to push guile more, and also guile has a nice (help) facility.

Anyhow it is running well enough for now. I'd like to improve my Scheme REPL environment a little more, but it is working OK for now.

[/lisp/scheme] permanent link

Wed, 20 Jun 2012

Android, and porting C++ and OpenGL via the JNI

I've been interested in the idea of porting free software to Android since I started working with Android. The first free software programs I considered doing an Android port of were written in Java. The reason I looked at Java programs first is Android seems to have a slight preference for Java over C and C++.

When investigating various Java programs for potential ports, I realized that porting the UI portions of the programs over, particularly ones that used Java graphical libraries such as awt or swing, would be difficult. Android does not implement these graphical libraries.

So then I began investigating free software Java libraries. One popular one which caught my eye was Jackcess, which could read Microsoft Access database files. I wrote a little Android UI wrapper around the library, and within a few days was able to release Panacea Database. Since its release, I have added more functionality to the program. I still have not tapped all of the library's functionality, such as for database creation.

OpenGL

The idea of porting C and C++ free software programs to Android, especially ones using OpenGL-family graphics, had been in the back of my mind for a while. An informative conversation I had with Katie from Golden Hammer Software at the 2011 Android Developer Labs pushed me along this route as well - not just by teaching me about the technical aspects of porting C++ apps to Android, but by showing me it was feasible.

When you're looking at doing OpenGL work on Android, one of the important things to know is that Android does not do desktop OpenGL. Android supports OpenGL ES, which implements only a subset of what OpenGL does. For example, OpenGL ES does not handle OpenGL's begin and end commands, and you cannot directly specify rectangles in OpenGL ES as you can in OpenGL. And so on.

Apple iOS uses an implementation of OpenGL ES as well. Porting C or C++ code which uses OpenGL ES from iOS to Android (or vice versa) is not that hard. This in fact is what Golden Hammer Software did. Porting Windows or Linux code that uses a full OpenGL library to Android is a much more difficult enterprise.
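To make the begin/end gap concrete: where desktop OpenGL could emit a rectangle between glBegin and glEnd (or with glRect), a GL ES port instead packs the corner vertices into a native-order FloatBuffer and hands it to vertex-array calls. Building that buffer is plain java.nio; the sketch below is my own illustration of the conversion, not code from any of the ports discussed here.

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.FloatBuffer;

public class RectToVertexArray {
    // A rectangle (x1,y1)-(x2,y2) as a 4-vertex triangle strip:
    // (x1,y1) (x2,y1) (x1,y2) (x2,y2), two floats per vertex.
    static FloatBuffer rectVertices(float x1, float y1, float x2, float y2) {
        float[] v = { x1, y1,  x2, y1,  x1, y2,  x2, y2 };
        // GL ES wants a direct, native-byte-order buffer.
        FloatBuffer fb = ByteBuffer.allocateDirect(v.length * 4)
                .order(ByteOrder.nativeOrder())
                .asFloatBuffer();
        fb.put(v).position(0);
        return fb;
    }

    public static void main(String[] args) {
        FloatBuffer fb = rectVertices(0f, 0f, 1f, 1f);
        System.out.println(fb.capacity()); // 8 floats = 4 vertices * (x,y)
    }
}
```

On the GL ES 1 side, the buffer would then be drawn with the GL10 vertex-array calls (glVertexPointer followed by glDrawArrays with GL_TRIANGLE_STRIP) instead of any begin/end pair.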

SDL

Porting a C or C++ program that links directly to a full OpenGL library to Android is going to be a fair bit of work. This brings us to the Simple DirectMedia Layer (SDL), a cross-platform multimedia library designed to provide low-level access to the UI elements of a program: audio, keyboard, mouse, joystick, 3D hardware via OpenGL, and the 2D video framebuffer.

Many programs that directly depend on the SDL library have no direct dependencies on OpenGL - the programs use SDL to mediate access to the needed lower-level backend libraries.

Most programs that depend on SDL were written against SDL 1.2 or lower. SDL has been rewritten as of version 1.3 and is not backward compatible with 1.2. Here we are only concerned with SDL 1.2 and lower, which is what the majority of the software out there uses. There is an unofficial port of SDL 1.2 to Android, mostly done by Sergii Pylypenko (pelya).

Pelya has ported 13 SDL games to Android and put them up on Google Play. One of the apps, OpenTTD, has had over 100,000 downloads so far, and another, Free Heroes 2, has as well. FH2 currently has a rating of 4.2 out of 5, so people seem happy with the port. With these games done, pelya has said he is finished porting games, but he still maintains the SDL 1.2 library for Android.

His library has its own unique little build system. I am developing on an Ubuntu GNU/Linux desktop, and am comfortable with using the command line if need be, so it is fine with me.

The way his build system is laid out, the jni directory holds the various libraries a lot of SDL applications will need - SDL itself of course, plus freetype, jpg, physfs and other such libraries. Among these is a sub-directory named application. Within it is a link called src which points to the application being ported - OpenTTD, Free Heroes 2, or whatever.

I started off by trying to build every application he had in that application directory. He suggested trying ballfield first, and it was easy to compile and test. Grafx2, Jooleem, Kobodeluxe, Milkytracker, Openal-demo, Opentyrian, Regression, Simplemixer, Test32bpp and Testmultitouch all worked OK. Others failed to compile for various reasons, or launched but were still broken - perhaps I needed to tweak the settings more.

He published and then unpublished Jooleem. I thought it was pretty cool, so I e-mailed him saying I wanted to release it, and asking whether there was some reason he had pulled it from Google Play. He said there wasn't, so I did some work on it and published it. He may have been right, though - the game does not have a high download rate, nor a high retention rate compared to other SDL ports I did later.

Having some experience with working with the stuff he ported, especially Jooleem (which I now call Bubble Boxem), I decided to try porting a game that pelya had not tried yet. Circus Linux was a small and simple program that used the SDL library, so I decided to port that. I succeeded in porting it as well.

Much of what is needed is in pelya's instructions. First you want to compile the program; the instructions explain how. If there is a data subdirectory, it should be zipped up, moved to AndroidData as the instructions explain, split if necessary, and, if split, the original data.zip removed. You want an icon.png file for the program icon. Once it compiles, you want it to run. If nothing appears on the screen, __android_log_write and __android_log_print can help: start at the beginning of main(), looking for output in logcat, and continue until you find the first problem, then the second. At some point, hopefully, the program will load.

Why SDL programs won't compile or run can differ from program to program, but I've found common themes. The first four listed are the most important to remember.

  • The C++ code asks to run in hardware mode instead of software mode, which Android cannot do.
  • The program looks for configuration files and graphics in directories that do not exist on Android; these paths have to be set up properly.
  • Check whether it looks for defines in a config.h file. These defines will have to be set properly for Android. Also look for similar defines outside config.h, like a define for LOCALE or the like.
  • The SDL_SetVideoMode call might have parameters Android cannot handle.
  • Pelya's framework script does not compile C++ files with a suffix of .cc instead of .cpp.
  • Stuff from iostream like cout and cerr does not work out of the box. Neither do XM audio files.

The above list covers every problem I've had so far with compiling or getting a screen to come up on Android.
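For the first and fourth points, the usual fix is to ask SDL_SetVideoMode for a plain software surface on Android. Here is a minimal sketch of that idea - the flag constants are copied from SDL 1.2's SDL_video.h so the snippet is self-contained; a real port would just #include <SDL.h>:

```cpp
// SDL 1.2 surface flags, reproduced here so this sketch compiles standalone.
const unsigned SDL_SWSURFACE = 0x00000000;
const unsigned SDL_HWSURFACE = 0x00000001;
const unsigned SDL_DOUBLEBUF = 0x40000000;

// Flags to pass to SDL_SetVideoMode(): desktop builds can ask for a
// hardware double-buffered surface, but the Android port copes best
// with a plain software surface.
unsigned video_flags() {
#ifdef __ANDROID__
    return SDL_SWSURFACE;
#else
    return SDL_HWSURFACE | SDL_DOUBLEBUF;
#endif
}
```

The same #ifdef __ANDROID__ approach works for the directory-path and config.h problems: keep the desktop behavior on desktop, and substitute Android-friendly values in the Android branch.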

Now that something is coming up on the screen, you may want to consider replacing SDL_UpdateRect calls with SDL_Flip calls, or you may get some gibberish on the screen. The SDL port does not currently handle SDL_UpdateRect calls well.

You also want to make sure the volume buttons work when in the SDL app. If you want to use them, make sure they are not redefined as keys. Explicitly listening for KeyEvent.KEYCODE_VOLUME_DOWN or KeyEvent.KEYCODE_VOLUME_UP and manually implementing adjustVolume also works.

Another consideration is the keyboard, and getting visible text on the screen. With pelya's framework, text appears in an EditText (which I sometimes move around on the screen, change the colors of, etc.). You can have a keyboard pop up on the screen and so forth. It is something to think about.

Sometimes the game just needs the arrow keys, and maybe a few more keys. Pelya's framework has mechanisms to deal with this. I use one such mechanism in my Ice Blocker game, when a player wants to switch from horizontal to vertical (or vice versa).

Future plans

So far I have ported six games to Android using pelya's Android SDL library. I am looking to see if there are any more good free software SDL apps to port over. Most of the games I've ported were primarily mouse-based games - they are now touch-based games. So the aesthetics have not changed that much for those particular games. In addition to this, most of the games I've ported have had a fairly simple graphical library dependency - on SDL. In the future I might port games with more of a keyboard (or arrow key) dependency. I also might port games which have more of a direct OpenGL dependency.

I am also interested in expanding the existing games I have. I am interested in doing more work through the Java/C++ JNI bridge in the games I have already done. I am also thinking about how to handle different languages and internationalization. Android's bionic library cannot handle locale. This means gettext and its portable object (po) and message object (mo) files do not work out of the box. Garen Torikian has been nice enough to give me some advice about this, and I might do translations along the lines of how he did it in Neverball ME.
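One gettext-free approach along those lines is a plain lookup table compiled into the app. This is just a sketch of the general idea with made-up strings, not Neverball ME's actual mechanism:

```cpp
#include <map>
#include <string>

// Per-language message catalogs keyed by the English source string.
// bionic has no locale support, so nothing here touches setlocale()
// or .mo files; the tables are just data baked into the binary.
const std::map<std::string, std::map<std::string, std::string>> catalog = {
    {"de", {{"New Game", "Neues Spiel"}, {"Quit", "Beenden"}}},
    {"fr", {{"New Game", "Nouvelle partie"}, {"Quit", "Quitter"}}},
};

// tr(): translate msg into lang, falling back to the English original
// when the language or the string is missing.
std::string tr(const std::string &lang, const std::string &msg) {
    auto l = catalog.find(lang);
    if (l == catalog.end()) return msg;
    auto m = l->second.find(msg);
    return m == l->second.end() ? msg : m->second;
}
```

The language code itself would come across the JNI bridge from the Java side, e.g. from Locale.getDefault().getLanguage().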

[/android] permanent link

Wed, 28 Mar 2012

Some success...


So I have had a little bit of success. In December of 2011, I was on one ad network, Admob, and made $6.66 for the whole month from them. I am now on three ad networks (Admob/Adsense, Millennial Media, Inmobi), and I made $7.62 from them in the past two days, more than the whole month of December. I would like to increase that in the future, but for now, $100 a month coming in is great. Of course, I want to roll as much of that as I can back into the business.

The breakthrough happened in late January. I have written Android apps from scratch like Bouncer and Love Poems, and I ported an open source Java library to Android with Panacea Database. Looking at a full-fledged open source Android project, FBReaderJ, I noticed some modifications I could make to improve it, aimed at an audience without much overlap with the existing FBReaderJ audience. FBReaderJ is GPL licensed, which worries some people, though it worries me less. Anyhow, I released my version of the app, "Free books to download & read", on January 24th. By the last day of January, 2425 installs a day were happening; by February 5th, 11000 installs a day. Daily installs ranged from over 8000 to over 11000 a day until February 20th, and the install rate is still over 2000 a day. As is normal, the active-install percentage has been going down over time, but it is still over 35%. The app currently has over 119600 active device installs. There is currently one ad - right before someone goes to a book - and it has been requested 13000 to 23000 times a day over the course of the past two weeks.

Having had success with modifying an open source project, I doubled down, and on February 12th I released a modified version of OI File Manager, another open source Android project. I chose it because it was open source, because I had thought of doing a file manager for a while, and because it had wide appeal - it is not a niche product like Panacea Database or Bouncer; many people can find it useful. I wanted to release another app with wide appeal to ride the wave of Book Reader. And it did: it has over 4239 active device installs, which among my five apps is second only to Book Reader. And that was achieved in six weeks, while I have been working on apps like Bouncer for ten months.

I do have my eye on one more Android open source project, but I have turned back to doing an original project. It uses Andengine, but is actually an app, not a game. It is original as far as I know, nothing else on Android does it in the manner mine will, which is much better than the handful of existing ones that are related to this app. I have to see how much work I am going to do on it before releasing it. It is more toward a niche product than a general one, but it is not a small niche. Anyhow, much work to be done on it, although I already have a decent prototype for one implementation of it.

Book Reader was making over $20 a day when the downloads were first flying. Also, I had an ad on the page seen when the app was opened for the first time, which I now do not have - although I may put that back. Anyhow, I rolled $100 of that Admob money into ads. While I was running my ads, Admob dropped their minimum ad bid to $0.01, so I dropped my bid to that. The money went mainly to buying ads in Brazil for the File Manager. Ads seem to boost downloads from the target market, even when they're not running; I don't know all the variables which cause that, although I can guess some of them. Anyhow, I now have over 1000 active users from Brazil for File Manager that I probably would not have had otherwise. Were they worth ten cents a head? Well, the initial buys were overpriced before Admob's price drop. Also, it was something of a test, and I want to roll my profits back into the business and couldn't think of a better thing to spend them on. Even with that $100 spent, I'm still getting over $350 from February Admob profits for Book Reader. That kind of money came from the initial pop; I'm now more at the $100 a month level, as I said before. I could probably make more with more ads in the Book Reader app, but I want to avoid having ads over the actual book, as that is annoying.

In terms of running Admob ads - you can choose the devices to target, the SDK version, the country (and sometimes a more specific location), whether to target mobile, wifi or both, gender and age group. Transfers of $50 or more from money I am owed to running ads get a small bonus of free ads. Each campaign is $10 a day minimum, and the minimum bid nowadays is 1 cent a click. For app download ads, you can see conversion rates for app installation.

The annoying part for Admob is the approval process. First you have to get approved to be able to transfer money from your balance to ad campaign budget. Then campaigns have to be approved. After I was approved for balance to budget transfers, I transferred $50 and submitted a campaign. A week later it still sat unapproved, so I sent them an e-mail, then it was approved. Contrast this to Millennial Media, who approved a campaign for me recently within hours. You'd think Admob would be more responsive to me wanting to give them my money.

So on that Millennial Media campaign - I noticed a few days ago that the paltry sum I made in February from Millennial Media had been put into my balance. The sum was paltry because I was not even signed up with Millennial at the beginning of February. Anyhow, I took the dollar or two and put it into a campaign in Norway for File Manager. It was approved within hours, which was the good part. One downside was the minimum 5 cent bid - 5 times what Admob charges. Also the targeting is not as precise for kinds of device and such. You can target by country though, which I did. I wonder if "Android" goes out to Kindles, Nooks and the like; I hope not, as it would be wasted money. Anyhow, my $1.20 daily budget was filled and I got 24 clicks. I'll probably do a bigger one next month for MM when my March money clears, maybe for different countries. Another nice thing about MM is I'm not stuck with $10 a day campaigns! But unlike Admob, MM keeps the money you earn for two months plus instead of one month plus, so I may as well roll the money back into ads.

I signed up for Inmobi as well, but you have to talk to them or something to get approved to transfer money from balance to budget. It's not worth it at this point.

I also might do Adsense for mobile ads. I'll have to see. I should get the $350+ by the middle of next month, so I have some ideas for the money. I might spend some money for a contractor to do some work on Book Reader - which I plan on using myself and sending back to FBReaderJ as well.

I had used Admob as my sole ad network prior to January. One reason I chose them is they were known to be reliable about sending checks - in fact, they already sent me one last year. Also, they have a low check sending threshold - if you make $20 in a month, which I'm now easily doing. They also send the money within one month plus. If I made money on ads on January 1st, or January 30th, that money would get sent to me on March 1st and would arrive, usually around March 15th in Paypal. For Millennial Media and Inmobi, the amount of time is longer.

But anyhow, I wanted other ad networks. For the sake of redundancy, for one - if there were some problem with Admob, I'd still have two other sources of income. Also, perhaps I'd get some better deals or extra functionality, which I have. Also, I like the idea of keeping some competition open for the ad networks - it benefits developers to have a few competing ad networks out there. I read a report which said the four ad packages top Android developers most often include are Adwhirl, Admob, Inmobi and Millennial Media. That dovetailed with what I had heard already, so I went with Inmobi and Millennial Media.

Inmobi seems to do everything manually, and even over the phone. My app approval seemed to be in limbo until an e-mail back and forth. Then I had a phone conversation, where the rep said they wanted me to push up the number of requests I was getting, as they thought it was too low. This conversation happened a month ago. I said my Book Reader got a lot of hits, so I submitted that. It was pending; then they said they wanted more info on my address etc., so I put that in, and it is still pending. Not that I mind much - I submitted the app at their urging, to some extent. As I said before, transferring an earnings balance to an ad budget requires manual intervention as well. Admob and Millennial Media are more responsive without hassle, so I'll deal with them more in terms of buying and selling ads for the time being. Inmobi is still the primary target for File Manager ads though, with MM and then Admob as fallback, and 80% of traffic is directed to Inmobi via Adwhirl right off the bat. Aside from responsiveness, I'd need to make $1.67 a day from Inmobi to get a monthly check from them, and right now that is more like 28 cents a day, so I haven't even hit that minimum with them (or with Millennial, where it is about $1.03 a day).

I suppose eCPM, RPM, CTR, etc. are important in differentiating ad networks, but one overriding thing is fill rates. Admob and Adsense integration has been increasing as time goes on; other than it taking a day for clicks, CTR, eCPM and revenue to update (but not impressions or fill rate), the two are very integrated. And for normal apps, the fill rate is usually over 98%, if not 99%. As opposed to this, Inmobi has had a 21-54% fill rate for me over the past two weeks. Millennial, which is getting a fraction of the direct File Manager traffic Inmobi gets, but which does get its runoff, has had a 77-86% fill rate for the past 9 days. The major slackoff from them is for countries like Brazil and Poland - they don't yet have the presence Google can afford there. But for the US, France, Germany, Japan etc., their fill rates have been on par with Admob's. With Adwhirl, lower fill rates are not as big a deal, but it takes seconds for Adwhirl to miss an Inmobi ad, then the Millennial ad, and then maybe even an Admob ad before putting up an Admob "Adwhirl" ad, and by that time the Activity with the ad may have been clicked away.

[/android] permanent link

Sun, 01 Jan 2012

Happy New Year


My New Year's started out the right way, one of my apps, Panacea Database crossed the 5000 download mark. It's kept to a 40%+ active/net install base as well, hopefully with some of the updates coming down the pike it will maintain, or even improve, that percentage.

[/android] permanent link

Thu, 17 Nov 2011

Profit

Looked at Admob today, I finally pushed past $25 in payments from my Android applications. $25 was the one-time fee I paid to get on Android Market. So I've made $25.16 from my three mobile apps so far, and am now 16 cents in the black. Admob sends you money when you hit $20 for a month, so in December I should be getting a check for October and before. In addition to the Admob money, Samsung was also nice enough to give me a free $500 value 10.1 inch tablet to write tablet-sized apps on. And with my latest update of Bouncer out this morning, all three of my apps now handle "extra-large" displays, as Android calls them.

I was contemplating that I'm now in the black this morning, and felt good about it. My thought in terms of my business of putting out Android apps revolves around having no recurring capital costs, and if at all possible, no capital costs at all. Particularly in terms of some web page that an app must contact that I'd have to pay $10 a month or so for. Right now I just code the app, push it to Android Market, and collect the ad money. Aside from the slow wear on my keyboard, mouse, screen etc., the only expense is my time.

I wrote a framework for a spreadsheet, and did a number of spreadsheet features for it. Then I worked on getting pre-2007 Excel files onto it, which I did. Then I worked on getting Excel 2007 and 2010 (.xlsx) files onto it - and got stuck. There are two possible paths to fixing this: an easier one if I can get things down to fewer than 65,536 methods, and a harder one if I can't. I took a shot at the easier path, and it just might not be possible, though I got rid of a lot of methods. I may be able to pare down a few more. If not, I'll have to go the harder route. Anyhow, I put the code up on Github.

A month ago, I finished rewriting the layout of Panacea Database for all major (and minor) device sizes and screen densities. Then I added a feature to remember the last file opened. I did some testing and QA on the last file feature, but perhaps not enough, as it seems there have been some crashes since then which probably pertain to that. Which I am looking into. People seem to want column sorting, which I can work on implementing. I might throw in some SQLite stuff, depending on how easy it would be.

So all of my apps have decent layouts for all major (and most minor) devices, which I am happy about. So now I am on to my new apps, as well as fixing bugs and implementing new features in Panacea Database.

[/android] permanent link

Mon, 10 Oct 2011

Another Android application

I released another Android application - Love Poems. It took off initially - by the fourth day there were 442 downloads, with 280 of them active installs. But then that slope of adoption leveled off. Not sure what hurt it - I did an update allowing users to increase or decrease the text size, while someone gave the app a two-star rating. It then sank in the Market rankings and downloads leveled off. A few days later I released an update with a few more poems, and also adjusted the text sizes a little. I will do updates in the future, in terms of both poems and display tweaking.

Android is continuing to gain market share. Here is the browser usage seen from various mobile operating systems, according to the web logs of the Internet's 7th most trafficked site, Wikipedia:

As the chart shows, the iPhone and iPad are doing well, as are Android smartphones. Windows Phone 7 is moribund - it is only 0.04% of traffic. There is more Android Honeycomb traffic on Wikipedia (0.05%) than Windows Phone. I guess we'll see how they do with Windows 8 and Mango, which is supposed to launch in 2012, but they are way behind Apple and Google. The modern tablet market is newer than the smartphone market, so maybe they'll have a shot at competing there. I downloaded the Windows 8 preview and developer kit and had a look. Their Store is free for developers, although applications are approved first.

I'm currently developing a fourth app. I won't reveal all the details until it's released, but it uses Fragments and the ActionBar. Android's compatibility package does backward compatibility for Fragments but not the ActionBar, so I am using Jake Wharton's ActionBarSherlock for backward compatibility in ActionBar usage. I have that all implemented already, actually. I haven't done all the happy stuff you can do with tablets and Fragments yet; we'll see about that. It's not an essential element of the project, but with all the usage of ActionBar and Fragments, redesigning it to do that will be easier. This new app may use SQLite as well, so I may be looking into SQLite.

I was invited to the Android Developer Lab in New York on August 24th. It was good - I met some interesting people, and they pointed us in the direction of where Android is going, which helps me point my development in that direction.

I've been doing a bit of work on Panacea Database's layout. I moved a lot of stuff into XML. I'm using scale-independent pixels and density-independent pixels as much as possible, as well as adjusting the size of buttons by layout weight and that sort of thing.

One thing I've been doing - I change how many rows I display when fetching rows from the database, and the scale-independent pixel text size of the display, depending on what screen size I have, what orientation I am in, and to some extent, how many dpi the display has. The way I've been doing this is putting a "gone" TextView in the XML, and from my code, reading the number of rows to display from that. Not sure if it's best practice, but it works - if I find a better way I'll do that.

[/android] permanent link

Sun, 10 Jul 2011

Android development

According to Alexa.com, Wikipedia is currently the 7th most trafficked web site. They are also one of the few large web sites to allow everyone glimpses of their web log analysis. I mentioned this in a previous blog post. In December 2010, Android devices made up 0.78% of Wikipedia's web traffic. At the end of May 2011 (June numbers are not done yet) that was up to 1.16%. So Android traffic on Wikipedia increased about 48% in six months.

Actually, the six month increase of about 48% from December to May was more-or-less matched by the one month increase from November 2010 to December 2010, which was a 47% increase in traffic. I guess a lot of people got Androids in their Christmas stocking, or next to their Hanukkah dreidels...

So anyhow, I released my second Android application, Panacea Database, on June 11th. I definitely followed the Release Early, Release Often philosophy for this one - I got the idea for it on June 7th, and by June 11th it was published.

It helps that another party wrote a nice Java library, and that someone else had posted a bug report seven months earlier which, once fixed, took care of all the Android bugs. Thanks Miha Pirnat, wherever you are!

So what it does is iterate table rows and do searches on Microsoft Access style files on Android - Microsoft Access 2000 to 2007, with a lot of Access 2010 working. I actually just sent a patch in to the library people to fix a bug - or to implement a kludge to get around the bug, anyhow; until I'm interested in dealing with Attachment data types, they'll have to write the real fix.

So both my apps have passed through the 500 download point. Bouncer has a 41% active/total install ratio, Panacea Database has a 57% install ratio. Why is that? Well to quote a critic on the Android Market, Silas, "Move to SD card!!" The app has a lot of PNG's and JPG's and is 3.8MB. Maybe I will move some of that to the SD card, who knows? It's an issue I have to figure out how to deal with.

My Admob revenue for the last week is 79 cents, $1.52 for the week before that, and $1.28 for the week before that. My first goal is $100 a month in revenues. Whether that be by ads, sales or whatever, it does not matter.

Initially I thought of just tossing out apps left and right and seeing what stuck. But you put an app out and you have to maintain it. And I'm just one person - for now, anyhow. I don't want lots of one-star ratings for my apps on Android Market. The lowest I've gotten were two three-star ratings for Panacea Database. One wanted me to fix the bug where newlines in a text data type would make a button disappear. I've partially patched that already, and have a full patch for that (hopefully) that I will release - oops, I mean publish - soon.

[/android] permanent link

Mon, 20 Jun 2011

A Guide for the Android Developer Guide
I wrote A Guide for the Android Developer Guide, which attempts to translate Googlese into English.

[/android] permanent link

Tue, 31 May 2011

Bouncer, my first Android application

So, I have published my first Android app (the concept for which someone else described to me). What have I learned about Android development and such since then?

My first (unpublished) Android app was heavy on ListView. It was a tree of ListView's really - the top ListView went into sub-trees of ListView's, until a leaf/node on the bottom was reached, which might be something else. I filled out the onCreate method, and an onListItemClick method.

The first screen of my new app was initially going to be a GridView. I then gave up on that. I then created two activities which could go back and forth to one another via clicks (listened to with OnClickListener) via Intents. Then I had them pass information to one another in Bundles. So now I can pass messages to my sub-trees via Bundles, and they can be separate Activities.

Having dropped the GridView, I tried out the TableLayout, which I eventually went with. So now I had my grid-like table of letters on the first screen, able to pass which letter was pressed via a bundle in the Intent to another Activity. I used Buttons for these letters.

I then wanted there to be a tab on the front screen, with the table of buttons in the primary tab, but with people able to tab over to the "About" tab. So I made the first activity a TabActivity, and opened the Activity with the table with an Intent.

I then wanted to change the color of the buttons, but found out it was not all that simple, and learned about 9-patch drawables and the like. So I created my own buttons, which needed their corner rounding to be specified and the like.

Google suggests you put an End User License Agreement in the application. There is a standard class to do this, so I put it on the application.

Ultimately, I want my app to cover all 50 of the US states, as well as the District of Columbia (Washington, D.C.). Currently, it covers 46 of the 50. I had the current ID for 46 of the states; at this point in development I started putting up older licenses that may still be valid.

Most of this time I was designing for a high density, normal size screen in a vertical position. About 17% of people using Android use medium density, however. Also, some people flip from vertical to horizontal mode; I even encourage this flipping in the application when the full image is about to come on the screen. So I did some work on making it at least function with medium density setups, and for high density setups when viewed horizontally. I get the display metrics, and then call different layouts depending on what the metrics are.

When to release is always an open question. "Release early, release often", agile development and so forth is the popular credo, and I agree with it for most applications. On the other hand, you can't release too early, especially since Android Market has a rating system and so forth. But at this point, I felt I had enough, and the last four holdout states it didn't look like I would get anything from them in the next few days, so I decided 46 was enough to be useful, that layout looked decent for most phones, and was at least usable for almost all phones. So I released.

One thing I did not do when releasing was release the initial version with ads. Why? Because Admob wants to know where an app is on Android Market to give you an ad code, and I had nothing up there yet. I later realized, due to my unfamiliarity with all of this, that I had misunderstood - I could have put an ad in the initial version. Within a few hours of publishing version 1.0.0, I released 1.0.1, which contained Admob ads.

It's been 28 hours since I released the initial version, and 15 hours since I released the version with ads. Thus far I have had 78 downloads of the app from Android Market, and 55 ad impressions served.

In subsequent versions I plan to improve the application. I will work to get the four missing states, and the District of Columbia. I will put more information about identification. I might put a bubble up announcing updates, but I wouldn't want it to be too annoying. I also have some kludgey stuff in the layout files which hopefully I can clean up, as I learn the Android API better these things can be more smooth.

[/android] permanent link

Fri, 22 Apr 2011

Android

I have been looking over Android's API and have been writing an Android application with Eclipse.

Android use has started to take off in the past months. I have looked at various metrics; one I like is from the Internet's 8th most trafficked site, Wikipedia. It shows the growth of Android use over the past six months:

The graph y-axis is the percentage of all browsers coming in - mobile, desktop and whatnot. X-axis is the time period of usage - the past six months. The OS versions are listed in the key, although "Mobile other" is a catch-all.

In October 2010, 0.47% of all hits to Wikipedia came from Android phones. In March 2011, 0.98% of all hits to Wikipedia came from Android phones. So that has more than doubled within the past six months.

[/android] permanent link

Wed, 23 Mar 2011

Evince, Ubuntu, python, etc.

I have been corralled into doing some programming in python. So at one point I decided to write a do-while loop and learned - python has no do-while loops. Terrific.

My patch made it into Evince 2.91.92, I'm officially a Gnome contributor, yay. I patched a bug while chasing down another bug. Carlos couldn't reproduce it - I wonder when it manifests itself. The bug crept in in December, and not many people are running evince released since then, so the pool to try to reproduce it is limited. Carlos fixed up my patch so that it wouldn't cause problems going in. I still have to fix that original bug. Actually, I already did, but the fix is trivial, and I want to look over my code again to make sure it's decent.

I also patched the evince package for the upcoming Ubuntu 11.04. It was a suggested backport of a commit. Again, my patch had to be massaged in. I changed the Ubuntu documentation for patches so as to point to the complete method of doing a patch.

I know people make use of git branches, but I never really used them until recently. They are very handy, especially if you're doing a lot of work on something. I will surely be using them more in the future.

[/gnome] permanent link