Using Github actions to regularly test an external project

Suppose you contribute to an existing open source project, and are interested in regularly running a custom workflow (e.g. static analysis, or tests with sanitizers, or other checks) against that project. To give a concrete example: I wanted to start running git’s test suite with various sanitizers enabled (e.g. ASAN and LSAN) in CI, without having to go through the process of including this in git’s official CI configuration.

Github Actions offer a nice and (relatively) easy way of doing this; here's how I configured daily ASAN+LSAN tests for git. For the TL;DR version, I recommend just looking at the workflow I ended up writing – but for further context and reasoning, read on below.

Although I was working against an existing Github project, this approach should also work for an arbitrary git project outside of Github, or even for mercurial/svn/tarball based projects. You’ll just need to do a bit more work if you aren’t working against a source of truth hosted on Github. (Fun fact: git isn’t even a purely Github-hosted project, although the Github repo is one of multiple authoritative sources.)

High-Level approach

Actions can be configured to run on a schedule – in my case I decided that running my workflow once a day was sufficient (I probably won’t even look at the outputs daily, but at least I’ll have reasonably fresh data available on days when I do want to check them). If your project is sufficiently big, you might consider running your workflow less frequently (otherwise runs will get in the way of each other and/or of other jobs you run under your account).

Once you’ve figured out a schedule, the actual workflow is pretty simple – each time it is run you’ll want to:

  1. Figure out if the upstream project has new changes. This isn’t strictly necessary, but reduces resource usage and potentially reduces noise (no point in rerunning the same failing tests if nothing has changed upstream).
  2. Run the tests or workflow that you want. This is very project specific – fortunately git already has some default Github Actions to build the binary and run the tests, and I was able to adapt those for my needs.

Determining if there are any changes

There are many ways to keep track of which commits have already been tested. I was lazy and decided to create a branch in my fork which tracks the upstream branch that I want to test. My fork’s branch is a snapshot of the upstream branch at the point in time when I last tested it, and is updated every time the workflow is run. In other words: if my fork’s branch tip == upstream branch tip, then there are no new changes needing testing. If upstream has changed, then there are new changes needing testing.

This approach has some obvious failure modes: suppose that the workflow is cancelled at the wrong time, or fails for infrastructural reasons (as opposed to actual test failures). My fork’s branch might already be up to date, suggesting that we’ve run our workflow against that tip – and we won’t try re-running our workflow (at least until the upstream branch changes again). That’s a bit ugly, but good enough for me – we’ll rerun the workflow as soon as the upstream branch changes again anyway (Github also sends an email when a workflow fails, giving you an opportunity to run it again manually).

A superior approach would be to record verified commits using e.g. git notes. Our custom workflow could then start by syncing our forked branch, followed by checking git notes to determine if the current tip has already been tested, followed by adding a git note once testing is completed. This is something I intend to implement in future. That said – we would also need to add logic to determine whether a given job failure was due to test failures (no rerun needed) or infrastructural/intermittent failures (rerun desired).

(My tracking branch approach could be modified to update the tracking branch only AFTER a successful test run, but that still requires adding logic to differentiate test failures vs infrastructure failures – so I haven’t bothered to do that yet.)

Example sync & compare job

If you are using an existing Github project as source of truth, you can reuse some existing Github Actions to do pretty much everything you need – my example is almost a 1:1 copy of the Fork-Sync-With-Upstream action’s example:

jobs: 
  sync-with-upstream: 
    runs-on: ubuntu-latest 
    outputs: 
      synced_changes: ${{ steps.sync.outputs.has_new_commits }} # So other jobs know if anything changed
    steps: 
    - name: Checkout next 
      uses: actions/checkout@v2 
      with: 
        ref: your_forks_tracking_branch
        # token: can be added here if you want actions to run on push (by
        # default, this step uses GITHUB_TOKEN, and therefore no actions are run on
        # push). Also, using the default token means workflow changes are blocked,
        # which simply forces you to review and push workflow changes manually. (Yes,
        # the latter means you'll need to manually trigger your custom jobs
        # after pushing the reviewed workflow changes.)
    - name: Pull upstream changes 
      id: sync 
      uses: aormsby/Fork-Sync-With-Upstream-action@v2.3 
      with: 
        upstream_repository: git/git # The source of truth
        upstream_branch: upstream_branch_you_want_to_test
        target_branch: your_forks_tracking_branch
        git_pull_args: --ff-only # Might not work for all projects

Running your custom tests

This part will be largely specific to the project you are working with – in any case you’ll want to make sure you don’t run unless changes have been found:

  regular: 
    # Only run after sync is complete, and only if changes exist
    needs: sync-with-upstream
    if: needs.sync-with-upstream.outputs.synced_changes == 'true' # the output is a string, so compare explicitly
    runs-on: # Whatever you want to run on
    steps:
    - name: Checkout Branch 
      uses: actions/checkout@v2 
      with: 
        ref: your_forks_tracking_branch
    - # Your project-specific steps...

Alternative Approaches

My original idea was to:

  1. Write a custom workflow to run the tests I want.
  2. Write a scheduled workflow to sync an updated copy of the upstream branch being tracked to my fork.

My hope was that it would be possible to run the workflows from 1 against the branch being pushed in 2. Unfortunately that isn’t possible right now: Github will only run actions on push if those actions exist in the ref being pushed. If your tracking branch is a clean mirror of the upstream branch, then your custom workflows by definition won’t be available in that branch, and hence won’t run.

This can be worked around by adding your workflows on top of your tracking branch. But that’s messy: instead of a fast forward, you’ll always have to rebase your branch. Also, by default, Github actions run with the GITHUB_TOKEN and therefore don’t trigger workflows on push – this is something that can be overridden with a custom token, but it’s still something that needs to be taken into account.

Posted in Uncategorized

ASAN_SYMBOLIZER_PATH improvements

When running code under sanitizers such as ASAN, you may wish to override the default symbolizer using:

ASAN_SYMBOLIZER_PATH=/path/to/llvm-symbolizer

(Although it’s currently not mentioned in the docs, there are equivalent LSAN_/MSAN_/UBSAN_ variants that should work with those respective sanitizers. Similarly, you can even set ASAN_OPTIONS=external_symbolizer_path=/path/to/llvm-symbolizer. Regardless of variant, they all suffer from the same issue that I will describe below.)

Somewhat surprisingly, the implementation used to require that the binary itself be named exactly llvm-symbolizer. Suppose you want to do the following because you built with clang-11, and the equivalent llvm-symbolizer binary name includes the version:

ASAN_SYMBOLIZER_PATH=/usr/bin/llvm-symbolizer-11.0.0

You would hit the following issue when starting a binary built with ASAN:

ERROR: External symbolizer path is set to '/usr/bin/llvm-symbolizer-11.0.0' which isn't a known symbolizer. Please set the path to the llvm-symbolizer binary or other known tool.

That’s a little irritating – and apparently I’m not the only one to think so [1][2]. (On some distros, the versioned path is just a symlink to a binary under /usr/lib/llvm-N/bin/llvm-symbolizer – hence you can just follow the link as a workaround – but on my distro llvm-symbolizer-N is the actual binary, and I’m hesitant to pollute my system with additional symlinks.)

I managed to hack together a fix, which has now landed in clang’s main branch – my understanding is that this will become part of clang 13:

https://github.com/llvm/llvm-project/commit/3d039f65015f0e7878b77c542a89493dcdd755d0

Posted in software

Tracking Protection for Android’s WebView

Unlike iOS (really just Safari), Android has no content blocking API. Tracking protection is available in some browsers, e.g. Firefox in combination with addons (and also in Firefox’s private browsing which includes tracking protection enabled by default). For fun, we decided to look into whether it’s possible to provide Tracking Protection when using Android’s default WebView implementation. This blog post describes how that was done, and explores some of the implementation details of our URL matching algorithm.

It turns out that Firefox Focus on iOS also had to build their own URL matching implementation: iOS content blocking is currently only available in Safari, and not in the iOS WebView equivalent. That implementation was influenced by the design of iOS’s content blocking APIs and file formats – but when you’re not subject to that restriction it’s possible to build a faster approach, so my ignorance of that version wasn’t necessarily a bad thing, as I’ll describe later in this post.

Why would you want to do this? One reason is that browser engines are large – and we wanted to see whether it’s possible to build a privacy focused browser whose size measures in megabytes instead of tens of megabytes – which would require reusing whatever engine the platform provides (in the case of iOS you actually have no choice in the matter; fortunately Android is a little more free). There are actually some drawbacks to using platform-provided browser engines – which will be the topic of a future post – but it’s certainly possible to implement tracking protection on top of Android’s WebView.

Tracking Protection Lists

Firefox and Focus use the Disconnect tracking protection lists: these are lists of domains hosting trackers that should be blocked, categorised by tracker type, e.g. Social trackers, Analytics Trackers, Advertising Trackers, etc. Further to this there’s an override “entity” list, which unblocks domains that are owned by a given company whenever you are browsing a site owned by that company. (E.g. if FooBar Tracker Corp owns both foo.com and bar.com, we would allow loading of resources from bar.com while browsing foo.com, even though we’d block all other sites from loading resources from foo.com and bar.com.) You can read more about these lists at the repo where the Mozilla copies of these lists are maintained.

As such, tracking protection is fairly simple: every time a given webpage requests a resource, we match the resource URL’s host against the blocklist. If it’s blocked, we check the entitylist to verify whether there’s an override in place for the current site. Android’s WebView provides a callback that is called every time it wants to load a resource, allowing you to override resource loading.
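
To make that flow concrete, here’s a minimal sketch of the hook (assuming a hypothetical UrlMatcher helper wrapping the blocklist/entitylist – this is illustrative, not the actual Focus code). WebViewClient.shouldInterceptRequest() is called for every sub-resource a page requests; returning a non-null WebResourceResponse replaces the network load, which is how a request gets “blocked”:

import android.graphics.Bitmap;
import android.webkit.WebResourceRequest;
import android.webkit.WebResourceResponse;
import android.webkit.WebView;
import android.webkit.WebViewClient;

import java.io.ByteArrayInputStream;

// Hypothetical interface: true if resourceHost is on the blocklist and not
// overridden by the entitylist for the site currently being browsed.
interface UrlMatcher {
    boolean isBlocked(String resourceHost, String pageHost);
}

class TrackingProtectionWebViewClient extends WebViewClient {
    private final UrlMatcher matcher;
    private String currentPageHost; // host of the page currently being browsed

    TrackingProtectionWebViewClient(final UrlMatcher matcher) {
        this.matcher = matcher;
    }

    @Override
    public void onPageStarted(final WebView view, final String url, final Bitmap favicon) {
        // Remember which site we're on, so the entitylist override can be applied.
        currentPageHost = android.net.Uri.parse(url).getHost();
        super.onPageStarted(view, url, favicon);
    }

    @Override
    public WebResourceResponse shouldInterceptRequest(final WebView view,
                                                      final WebResourceRequest request) {
        final String resourceHost = request.getUrl().getHost();
        if (matcher.isBlocked(resourceHost, currentPageHost)) {
            // Serve an empty response instead of letting the WebView hit the network.
            return new WebResourceResponse("text/plain", "utf-8",
                    new ByteArrayInputStream(new byte[0]));
        }
        return super.shouldInterceptRequest(view, request); // null means: load normally
    }
}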

The iOS content blocking API actually allows for regex based matching on the entire resource URL, which is more complex than what we needed for basic tracking protection. The disconnect lists only work using domains/hosts, which simplifies the implementation somewhat. Focus on iOS originally only supported the content blocking API, and added the browser later – the browser implementation therefore simply reused the same bundled list format. The content blocking lists aren’t used for iOS’s WebView equivalent, although that is apparently changing.

Implementing URL matching

The simple (but not particularly efficient) method would be to iterate over the list of hosts every time a resource is fetched. In fact, we could just iterate over the regexes in the iOS content blocking lists, and check those directly to avoid implementing our own matching.

The original Android implementation was actually a rushed afternoon (or two) hacky proof of concept from our December All Hands – it turned out to be robust and fast enough, so it was kept beyond that time. It might be possible to build an even faster implementation, but this one hasn’t provoked any user complaints yet.

As mentioned, iterating over the list of blocked hosts is expensive: O(n·h), where n = the number of blocked hosts (very large) and h = the host length (small). Fortunately at some point or another I had learned about Tries (contrary to what some might assume, an Information and Computer Engineering degree at my alma mater doesn’t actually involve any Data Structures and Algorithms – but that’s nothing a little independent study can’t quickly fix).

Tries offer much smaller memory consumption (not that memory consumption is particularly significant compared to what a web engine will need), and much faster lookups – O(h):

A trie containing multiple domains.

(In reality, the Trie possibly consumes more memory because of the overhead of each node being an object. More efficient representations are available in order to avoid one node per character, but that didn’t seem worthwhile given that this implementation is already performant enough.)
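
To illustrate the idea, here’s a rough sketch of such a Trie (my own simplified naming and structure, not the actual Focus implementation). Hostnames are inserted reversed (“bar.com” becomes “moc.rab”), so that every subdomain of a blocked domain shares a prefix with it, and a lookup only has to walk the resource host once:

import java.util.HashMap;
import java.util.Map;

class HostTrie {
    private final Map<Character, HostTrie> children = new HashMap<>();
    private boolean terminal; // a blocked domain ends at this node

    // Insert an already-reversed host, e.g. "moc.rab" for "bar.com".
    void put(final String reversedHost) {
        HostTrie node = this;
        for (int i = 0; i < reversedHost.length(); i++) {
            final char c = reversedHost.charAt(i);
            HostTrie child = node.children.get(c);
            if (child == null) {
                child = new HostTrie();
                node.children.put(c, child);
            }
            node = child;
        }
        node.terminal = true;
    }

    // True if the (reversed) host is a stored domain, or a subdomain of one.
    boolean isBlocked(final String reversedHost) {
        HostTrie node = this;
        for (int i = 0; i < reversedHost.length(); i++) {
            final char c = reversedHost.charAt(i);
            if (node.terminal && c == '.') {
                // A whole blocked domain has been consumed and the next character is
                // the label separator: foo.bar.com matches a bar.com entry, but
                // foobar.com does not.
                return true;
            }
            node = node.children.get(c);
            if (node == null) {
                return false;
            }
        }
        return node.terminal;
    }
}

The cost of a lookup is proportional to the host length, however many domains the blocklist contains – and the ‘.’ check above is exactly the domain-boundary awareness discussed further down.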

There’s still a bunch of overhead in various places: we’re using the Android/Java URL classes to extract the hostname from the resource URL, which could well be more costly than the actual act of searching the tree. I haven’t measured in detail yet.

(Building this completed the bi-yearly cycle of proper Data Structures and Algorithms construction – I’d last been able to build some trees for a bookmarks folder UI the preceding summer.)

As mentioned above, there’s also the entitylist: this consists of sets of hosts (A), for which another set of hosts (B) is whitelisted (usually those sets would be the same, but that isn’t guaranteed or necessary). This is simply an extension of the same tree: the set of whitelisted domains (B) is another Trie. That Trie is then attached to every node representing one of the site domains (A) – we simply extend the default Node into a WhitelistNode, which holds a reference to the whitelisted-domains Trie.
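
Building on the HostTrie sketch above, that extension could look roughly like this (again my own hypothetical naming, not the real implementation):

// A node terminating one of the site domains (set A) carries a second Trie
// holding the domains (set B) that are whitelisted while browsing that site.
class WhitelistNode extends HostTrie {
    HostTrie whitelistedDomains;
}

At lookup time the order is then: if the resource host hits the blocklist, walk the current page’s host through the entitylist; if that ends at a WhitelistNode whose whitelistedDomains Trie contains the resource host, the load is allowed after all. (In practice the insertion path also needs a small node factory so that site-domain nodes are created as WhitelistNodes – omitted here for brevity.)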

Every real project needs its own String implementation

Searching and inserting into our hostname tries involves walking strings backwards. That would require either some annoying index arithmetic, or reversing the String before insertion/search (i.e. creating a copy of the String). Neither of those sounded like fun, so I decided to add a String wrapper. This is arguably completely unnecessary, but made things a little simpler (and perhaps more efficient). The String wrapper also meant that the Trie implementation didn’t need to have much knowledge about subdomains either – we can just start at the start of our reversed String. (Because we need to correctly match subdomains, but not other domains, the Trie still needs to be aware of the full stop being used for domain separation, so it isn’t completely domain agnostic.)

We only need to access the String character by character, which is why we can avoid a complete string copy/reversal – if this weren’t the case, there would be little value in a wrapper.

The wrapper takes care of index arithmetic for reversed strings – and implements support for getChar(int) and substring(int). That’s pretty much all there was to FocusString. (I no longer need to miss the amazing days of many C++ string classes…)

substring() copies…

Somewhat naively, I’d assumed that our Java implementation doesn’t create a copy when calling String.substring() – in other words that it would just adjust internal indexes while reusing the same String buffer and/or equivalent behaviour. Without that assumption, there would be little point in avoiding a String copy on reversal, since – thanks to our recursive Trie traversal – we’d be creating copies when traversing that Trie.

It turns out that assumption was wrong: it was true for Java 6, and also for earlier versions of Java 7 – before changing in Java 7u6. I don’t really know where Android’s implementation originates, but it also creates copies. Thus, FocusString was expanded to include offsets, and FocusString.substring() merely fiddles those offsets.
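
Here’s a minimal sketch of that idea (my own names, not the actual FocusString code): a wrapper that reads an existing String backwards, and whose substring() just bumps an offset instead of copying characters:

final class ReversedString {
    private final String backing; // shared, never copied
    private final int offset;     // how many (reversed) characters have been consumed

    ReversedString(final String backing) {
        this(backing, 0);
    }

    private ReversedString(final String backing, final int offset) {
        this.backing = backing;
        this.offset = offset;
    }

    int length() {
        return backing.length() - offset;
    }

    // charAt(0) returns the last character of the backing String, and so on.
    char charAt(final int index) {
        return backing.charAt(backing.length() - 1 - offset - index);
    }

    // "substring" in the reversed view: drop the first `index` reversed characters.
    // No characters are copied; we only record a new offset into the same backing String.
    ReversedString substring(final int index) {
        return new ReversedString(backing, offset + index);
    }
}

A recursive Trie traversal can then call substring(1) at each level without paying for a copy at every node.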

It was hard to predict in advance what the impact of this change might be, since I didn’t have much experience in this area – it turned out to be a noticeable improvement: on my fairly modern Nexus 6P, average URL matching time dropped by about 20%, from approximately 1.2ms to 1.0ms (these numbers are for debug builds with code coverage enabled – for coverage-free debug builds the drop is from 0.42ms to 0.26ms, which is even more significant). We already had tests in place which helped verify that things wouldn’t break, so this was a fairly low risk change (I did use this as an opportunity to extend those tests though).

Results

As mentioned above, the iOS equivalent implementation is a lot simpler. It iterates over the lists of hosts, and does regex matching for each host. I decided to port that implementation to Android, primarily to check for consistency of results. Fortunately the Trie based implementation was mostly correct, except for our subdomain matching. Both bar.com and foo.bar.com should be blocked if bar.com is in the blocklist. My Trie based implementation also blocked foobar.com. Ooops. That was a quick fix, albeit one which required making the Trie search implementation hostname aware. Other than that, results have been the same in our testing.

These parallel implementations allowed for performance comparisons. (Note: the underlying regex and other library implementations on each platform might be different, so the gap could look quite different if both algorithms were running on an iPhone.) On my N6P, the Trie based implementation took an average of 0.3ms per resource URL check, while the ported iterative/regex approach took 42ms. Some pages like to load a lot of resources – so that’s a difference you’d notice quickly. It’s possible that my ported implementation was suboptimal, but it’s certainly clear that the Trie based approach was worth it from a performance perspective.

To be fair, this implementation did take more work – and you have to remember that the iOS implementation was influenced by the blocklist file format that iOS uses for its tracking protection API, whereas the Android version was a clean-sheet design.

Edits:

Trie Diagram corrected on 10th May 2017, thank you to Gervase Markham for spotting the mistake.

Posted in Firefox for Android, Mozilla

Postbuild gradle commands in Buddybuild for Android

We’re currently using Buddybuild as our CI system for Firefox Focus for Android. It’s been a great solution for getting CI running with minimal hassle, and it also provides the infrastructure to quickly deploy builds for internal testing.

Although Buddybuild is usually simple to use, enabling static analysis and code coverage was a little bit tricky. Here are some notes which should be helpful if you want to set up similar tools (or even just arbitrary gradle commands) for your own buddybuilt Android projects.

Custom build commands

Buddybuild doesn’t appear to run a normal gradle build (“gradlew build”); instead, it looks like it overrides your gradle setup and calls the relevant apk building steps directly. That’s nice (you can select exactly which flavours should be built in the web UI), but it also limits you to running whatever Buddybuild thinks should be run. The web UI doesn’t offer custom gradle commands; it only allows disabling/enabling UI tests. (Apparently there is a code coverage option, but that’s not currently operational.)

This is where the “postbuild” hook, which lets you run arbitrary commands, comes in. All you need to do is insert your desired commands into buddybuild_postbuild.sh, and Buddybuild will run it automatically.

I tried adding ./gradlew findbugs there:

# buddybuild doesn't seem to offer any direct way of running findbugs.
# findbugs is run as part of |gradle build| and |gradle| check, but those
# aren't run directly in buddybuild.
./gradlew findbugs

By itself, that didn’t work, as detailed below.

(You’ll also want to insert any code coverage related or other static analysis commands here.)

Disabled Flavours

We have a sizeable matrix of build flavours, with multiple dimensions – in addition to multiple buildTypes. Some of those aren’t needed for now, so we disabled them using a variantFilter. It turns out Buddybuild ignores that variantFilter: the Buddybuild UI lets you select which flavours should be built (which is probably what necessitates removing variantFilters). That’s fine for Buddybuild, since they call the appropriate gradle commands to build only the desired and configured variants – but it means that calling gradlew findbugs will fail if there are unbuildable variants.

Of course: we had some unbuildable variants. I decided it was best just to make those buildable (those variants might be needed in future, hence fixing this issue wasn’t really a waste of time), but until I did that we got build failures associated with those supposedly disabled build variants.

Unbuildable variants probably aren’t a particularly common situation – our project is a bit special in that we wanted different sources for one module depending on one specific flavour dimension. Gradle doesn’t make that particularly easy, so we hadn’t bothered to make sure this worked for all flavours in the matrix. We just needed to sprinkle some gradle magic to ensure the full matrix could be built – we simply hadn’t seen any need to do this yet given that we weren’t shipping those variants.

As it turns out, I could’ve avoided that because of the solution I found for our next issue:

Unbuildable Buddybuild SDK

We enable the Buddybuild SDK for automatic updates and crash-reporting for our test builds – but we only enable that for our master builds (so we only saw this issue after landing on master). It looks like Buddybuild modifies your app’s sources to add the SDK. The relevant dependencies aren’t accessible when running buddybuild_postbuild.sh, meaning you’ll see something like the following error when running gradle commands from there:

/tmp/sandbox/workspace/app/src/main/java/org/mozilla/focus/FocusApplication.java:7: error: package com.buddybuild.sdk does not exist
import com.buddybuild.sdk.BuddyBuild; // This line isn't in our sources, huh?
^
/tmp/sandbox/workspace/app/src/main/java/org/mozilla/focus/FocusApplication.java:22: error: cannot find symbol
BuddyBuild.setup(this);  // Huh, again
^
symbol: variable BuddyBuild
location: class FocusApplication
2 errors
:app:compileFocusGeckoDebugJavaWithJavac FAILED

At this point I realised it would probably be easier to just revert all of Buddybuild’s changes – in other words, return to whatever state we build with locally. Since we’re using git, we added the following to the top of our buddybuild_postbuild.sh (replace with your VCS of choice’s revert/reset commands as appropriate):

# buddybuild modifies our buildscripts and sources (this is partly to enable
# their SDK, and partly to allow selecting flavours in the BuddyBuild UI).
# We don't know where the Buddybuild SDK lives, which causes gradle builds
# to be broken (due to the SDK dependency injected into FocusApplication),
# it's easiest just to revert to a clean source state here:
git reset --hard

This should also fix the gradle variantFilter issues described previously, so I could’ve saved myself the effort of fixing our variants. Disclaimer: I haven’t actually tested with our flavour fixes reverted, don’t trust me on this…

Final postbuild script

#!/usr/bin/env bash
set -e # Exit (and fail) immediately if any command in this script fails

# buddybuild modifies our buildscripts and sources (this is partly to enable
# their SDK, and partly to allow selecting flavours in the BuddyBuild UI).
# We don't know where the Buddybuild SDK lives, which causes gradle builds
# to be broken (due to the SDK dependency injected into FocusApplication),
# it's easiest just to revert to a clean source state here:
git reset --hard

# buddybuild doesn't seem to offer any direct way of running findbugs.
# findbugs is run as part of |gradle build| and |gradle check|, but those
# aren't run directly in buddybuild.
./gradlew findbugs

./gradlew jacocoTestReport
bash <(curl -s https://codecov.io/bash) -t $CODECOV_TOKEN

# More here once we enable further tools

(The most up to date version can hopefully be found here.)

Further CI differences

Another difference that might exist is what (if any) tests are built and run: we only chose to run UI tests on master, and not on our development branches. Findbugs, at least in our configuration, runs over all compiled classes (including tests). Hence we only discovered some findbugs issues after enabling findbugs on master. That one is harder to debug, since we only saw the following in our logs:

:app:findbugs FAILED
FAILURE: Build failed with an exception.
 * What went wrong:
Execution failed for task ':app:findbugs'.
> FindBugs rule violations were found. See the report at: file:///tmp/sandbox/workspace/app/build/reports/findbugs/findbugs.html
 * Try: Run with --stacktrace option to get the stack trace. Run with --info or --debug option to get more log output.
BUILD FAILED

There didn’t appear to be a simple way of accessing the findbugs report, and local builds didn’t exhibit any findbugs failures – until we ran UI tests locally. Once we figured that out, it was easy to fix master again.

Summary

  1. Buddybuild modifies your gradle config and sources to enable flavours and to install the SDK (optional).
    Solution: run git reset --hard in buddybuild_postbuild.sh to get your tree back to its expected state.
  2. master builds might be different due to the Buddybuild SDK (see 1), and also due to tests that are only run on master.
    Solution: if you’re adding commands to buddybuild_postbuild.sh and/or adding static analysis tools, enable the SDK and UI tests on a given branch before merging to master. And if you see failures on your branch, make sure you’ve run the same tests locally (before running the static analysis tools locally) in order to replicate the inputs that findbugs etc. will encounter in Buddybuild.
Posted in Firefox for Android, Mozilla

Fun with VectorDrawables (or: how to make them work everywhere)

We recently started trying to use VectorDrawables, as part of an effort to minimise app size. VectorDrawables are a vector image format, based on SVG. Reliably using them across multiple Android versions, and a diverse medley of devices, has proven somewhat tricky – this blog post is a compilation of what we’ve done to get them rendering acceptably and consistently.

What’s a VectorDrawable

VectorDrawables are an XML vector graphics format, with some similarities to SVG. They’re much smaller than the equivalent png or webp. They are scalable, so you only need to ship one file, as opposed to 3+ when using raster images (i.e. png or webp).

When should you use them

On supported devices (we’ve tested Android 4+), for all images up to 200 x 200 dp (Google recommend not using them for larger images to avoid performance and memory-consumption issues), so long as you are using support library 23.2 or newer OR you only support Android 5 and newer.

If your app is extremely small there’s little advantage to using VectorDrawables, so it’s probably best to just stick to raster images to avoid the various issues detailed below.

Where can you use them

Pretty much anywhere: on Android 5 and newer, VectorDrawables are supported everywhere by default. On older devices you’ll still be able to get them working, as long as you’re willing to get your hands dirty.

For 4.4 and older you need support library 23.2 or newer. Officially, as detailed in the release notes, VectorDrawables are only supported in some components: this includes AppCompat components. We’ve had no issues using them in design support library components, including NavigationView.

But it’s also possible to load VectorDrawables directly – which I’ll describe below. (Note: there was a temporarily supported feature for overriding all drawable loading when VectorDrawables were placed within a Drawable container, but that was disabled again, apparently due to issues in handling Configuration changes, and increased memory consumption.)

Build system changes

If you’re building for Android 5+, no changes should be needed. If you’re building for lower versions with use of the support library, and using gradle, follow the release note instructions. If you happen to have your own build system that invokes aapt directly (but who would do that?), you’ll need to add “--no-version-vectors” to your aapt invocation.

VectorDrawable loading with 23.4 (and possibly 23.2 – 24.1)

Instead of loading Drawable’s via context.getResources(), you can use AppCompatDrawableManager:

-    final Drawable d = context.getResources().getDrawable(drawableID); // This only loads system-supported resources
+    final Drawable d = AppCompatDrawableManager.get().getDrawable(context, drawableID); // This loads <vector> too!

This doesn’t seem to be documented anywhere, and is therefore essentially unsupported. However it uses the exact same Drawable loading code as the rest of AppCompat, and leads to the same code paths as the (newer/official) 24.2 code. This is what we use for most of our Drawable loading.

VectorDrawable loading with 24.2 and newer

24.2 finally introduces official VectorDrawable support, see the release notes and API docs:

-    final Drawable d = context.getResources().getDrawable(drawableID); // This only loads system-supported resources
+    final Drawable d = AppCompatResources.getDrawable(context, drawableID); // This loads <vector> too!

Version specific bugs

Android 4: corrupted drawables (Proguard)

Screenshot of VectorDrawables exhibiting corruption on Android 4

Plenty of corrupted drawables. You’ll never get the same corruption twice!

This can be fixed with some proguard tweaks:

-keepclassmembers class android.support.graphics.drawable.VectorDrawableCompat$* {
   void set*(***);
   *** get*();
}

-keepattributes LocalVariableTable

This issue has been filed in the Android bug tracker; it doesn’t look like a fix has landed yet.

Android 4 and 6: missing drawables on 4, artifacts on 6 (arc curves)

On Android 4, drawables with arc curves might randomly disappear (this is easy to test if you have a NavigationView that you can hide and reopen repeatedly, e.g. in a BottomSheet context menu). On Android 6, such drawables can display artifacts (this is a different kind of corruption to what we saw on Android 4: on Android 6, arc curves seem to result in some points in the image being misplaced, as opposed to true corruption).

Image of icons being rendered with artifacts visible.

The bottom 3 icons all exhibit some rendering issues on Android 6

Screenshot of some VectorDrawables not being rendered on Android 4

Some icons will occasionally disappear on Android 4 (ignore the blurry pin icon, that’s just an excessively scaled png that we’ve since replaced with another VectorDrawable)

After some experimenting, I discovered this only happens for drawables with arc curves. You can check whether these are present by looking for the letter “a” or “A” in the “android:pathData” part of your VectorDrawable, or within the <path> if you’re looking at the source SVG.

screenshot of an editor, pointing to where an arc curve is defined

You can spot an arc curve wherever ‘a’ or ‘A’ is used in a vector path

One simple way of getting rid of these is:

  1. Import the source SVG into an SVG editor
  2. Perform an “ungroup”
  3. Export again.

Boxy SVG is a fairly lightweight option for doing this. If you’re processing a large number of drawables, scripting Inkscape might be a more efficient choice. (Make sure to double-check the output; there’s no guarantee that any edits will result in the purging of arc curves.)

A warning for the road

Our QA team has done some good testing, which meant that we discovered the corruption issues described above early during development. Our release audience is significantly more diverse than what we can test ourselves – and we’ve only been using VectorDrawables for an as-yet unreleased feature – so we can’t actually guarantee that there won’t be more issues in the wild. The usual culprits have all been tested (every Android version, x86 devices, Asus 4.X devices, some other odd tablets), which makes me more confident that we’ve ironed out most of the kinks.

Our first universally-visible VectorDrawable landed recently, which should let us verify that VectorDrawables really are ready to ship everywhere.

Other size reduction tips

For larger images (larger than 200 x 200dp), or even just as a quicker fix (getting SVG versions of images is laborious), I recommend looking into webp conversion. Webp provides somewhat better compression than png (although you need to be careful not to compromise image quality), with 30% size savings commonly reported. Android 4.0 only supports non-transparent images (i.e. no alpha-channel); you need Android 4.3 for full support – nevertheless we converted a fair number of images recently. I’ve written a small wiki page detailing some png optimisation commands, and webp conversion tips.

 

Posted in Firefox for Android

Some Impress Remote Improvements

Over the course of the summer I’ve been adding some minor improvements to various parts of the Impress Remote — both on the Server and Android components.

Remote Deauthorisation & Dialog Improvements (for WiFi connected remotes)

(Implemented at the Paris Hackfest.)

Impress Remote Removal

Prior to this, authorising a remote was a permanent and irreversible action — now you can remove them again (this only applies to WiFi/network based remotes — Bluetooth remotes are managed using system utilities).

At the same time some further improvement work was carried out on the dialog to remove some other annoyances (e.g. occasional flickering), in addition to the fixing of an (admittedly minor) memory leak (shhh, apparently I wasn’t aware that C++ didn’t have a garbage collector when I first started working on LO), and finally improving the pin-entry (i.e. auto-focus on the pin-entry box, and no useless/junk “0” in that pin box anymore).

Ultimately it would be cool to use fancy new widget layouting to assemble the list of remotes, instead of the current custom vcl widget (which would make it easier to add more functionality in future if desired, and there is certainly no lack of ideas in that department), but that’s a story for another day (and no idea how simple/complex the conversion would be).

Emulator auto-discovery

This actually existed in the early days of the Android Remote, but unfortunately got lost at some point. It’s the work of only a few lines but makes the lives of developers simpler — especially so for first time developers who may not be aware of the networking infrastructure around the emulator (the “real” host is reachable under 10.0.2.2 from the emulator).

Emulator Autodetection

(This was implemented at the LibreOffice Conference Hack-Night, where I wanted to work on the bugs/features below, but couldn’t actually test the remote on my phone as my laptop’s Bluetooth adapter was borked, and the University WiFi was blocking the remote too. I couldn’t actually remember the correct host IP, and had managed to type it incorrectly on the remote development wiki page back in 2012 too…)

Grid View Current Slide Highlighting

The grid view previously offered no hints as to which slide is selected, that is now remedied:

Impress Remote Grid Highlights

The design is far from final, and can easily be adapted in the resource files, but the functionality now exists.

Refactoring and Bugfixing

Over time the code has become a bit untamed — fixing a bug in the laser-pointer mode (where the displayed slide wouldn’t be updated when the slide is changed on a different device and/or server) involved some refactoring that not only deduplicated a chunk of slide-change listening code, but also allowed easier (and more correct/simpler) implementation of some of the above features, e.g. the grid-view highlighting.

The future

There’s never a shortage of ideas/bugs, just too little time. There’s no plan, but some things that might be cool would be features such as storage of presentations on the remote (to allow transfer between PCs), chromecasting of presentations (although that would be very complex / would require using the full LibreOffice on Android porting work + a good chunk more — a very pie in the sky idea), Android Wear integration (probably mainly for showing the stopwatch), in addition to just more polishing of the existing functionality.

Finally, a big thanks to Google for sponsoring the initial implementation of the Impress Remote as part of the Google Summer of Code 2012, Michael Meeks as my first GSOC mentor, and the LibreOffice development community in general for their support and advice.

Posted in LibreOffice

LibreOffice on Android #3 – Calc Documents

After a somewhat painful debugging experience*, it’s now possible to view Calc documents on Android too.

This combines the Calc Tiled Rendering work (thanks to the Google Summer of Code) with the LibreOffice Android Viewer developed by Collabora (thanks to Smoose):

Calc on Android

This work is still on a branch (as some of the changes affect Calc’s normal rendering, and still need to be fully verified so as not to introduce unwanted bugs there), but should hopefully be mergeable soon.

* Using the r10-ndk it was possible to at least set (and make use of) breakpoints on methods; however, gdb seemed to be blind to any other debug-info, making this a pointless exercise. With the r9d-ndk, gdb wasn’t even able to connect to the application. (Fortunately we can still get log output via adb logcat — resulting in slow but usable debugging.)

(Update: It’s also worth noting that we don’t use the standard ndk build system (the LO build system is hugely complex already), and we have problems with the standard Android linker due to the sheer size of what we build. We therefore currently link all our (usually separate on a desktop) libraries into one huge library – but restrict building debug info to only a few modules, so that we can still link that huge library without running out of memory – which might also not be so helpful for ndk-gdb. I’ll be back on my main machine soon, so I can hopefully experiment with a full debug-info build there; on my laptop that just isn’t feasible.)

Posted in LibreOffice

Calc & Impress Tiled Rendering, and LOKDocView

At the LibreOffice Conference 2014 in Bern I gave some very brief talks, both related to my work this summer on tiled rendering (and the possibilities of reuse in external applications):

Calc and Impress Tiled Rendering

A shortish talk (as part of the GSOC Panel) on the implementation of Tiled Rendering for Calc and Impress:

bern14_ahunt_calcimpress_tiled

(Click on image for Hybrid PDF)

LOKDocView: the LibreOfficeKit GTK+ Widget

A very brief lightning talk on how to use our shiny new LibreOfficeKit GTK+ Widget (named LOKDocView). It’s hopefully easy to use, hence one slide of real content should be enough to explain how to use it.

bern14_lightningtalk_ahunt_lokdocview

(Click on image for Hybrid PDF)

Attending the conference was a very valuable experience, allowing me to see past and future work by fellow contributors, in addition to getting useful technical advice from some of the most important LibreOffice developers.

Posted in LibreOffice

Calc & Impress Tiled Rendering preview

Recently I’ve been working on Calc and Impress tiled rendering, with some results now becoming visible:

Impress

Impress Tiled Rendering is now integrated into master — there are still some issues, i.e. foreground images are not shown yet (this is a bug that’s shared with calc tiled rendering), and it’s not yet possible to select between rendering only slides, only notes, or both (i.e. we currently default to whatever mode the document was last opened in). However in general it seems to work quite well:

Impress Tiled

Impress Tiled Rendering: Unfortunately no Image rendered

In fact very little work had to be done to get tiled rendering working here — the hardest part was figuring out what part of Impress to plug into: once I’d gotten my head around Impress’s architecture, connecting up the rendering was a matter of a few lines of code.

Calc

The Calc work is somewhat more substantial, primarily due to the way that scaling for cell rendering works: Calc calculates on-screen pixel-based sizings in a number of steps using its own scaling methods, which can sum up to noticeable errors between expected and rendered content (which would result in discrepancies when later compositing tiles). This means that there is a significant amount of work needed in rewriting parts of the scaling: while the tiled rendering itself is beginning to look acceptable, the normal UI for Calc is now partly broken, primarily in that scrolling is rather glitchy (however this is being fixed bit by bit, and it is hoped it will be mergeable in a usable state soon). This work is still staying on a branch for now — i.e. until it doesn’t break the usual way of using Calc.

Similarly to Impress, images in the foreground aren’t being rendered yet — as far as I can tell this is the same underlying issue, and is what I’m currently working on fixing.

Calc Tiled

Calc Tiled Rendering: charts work too!

Other Stuff

In addition to the work on actual tiled rendering, there have been some further additions in the surrounding code and testing tools:

  • “Zoom” Controls for the gtk tiled viewer. (Zooming is however quite slow as we’re repainting a huge tile…)
  • A part selector for the gtk tiled viewer, i.e. permits switching between tabs in a spreadsheet, or slides in a presentation (writer documents are however rendered all as one huge block).
  • Associated zoom and part selection methods for the LOKDocView GTK+ widget.
  • A quad-tiled widget for testing (nothing to do with real tile composition…): this allows for inspecting tile transitions/borders (and was useful for finding some more glaring issues in the calc implementation).
  • Some automated tests that aren’t yet fully enabled due to some further bugs that have been uncovered (which would cause them to fail).
Posted in LibreOffice

LibreOfficeKit GTK+ Viewer Widget

Easily integrating LibreOffice directly into any other application is now a step closer thanks to the new GTK+ lok_docview widget (currently only on the feature/gtkbmptiledviewer2 branch, API and naming liable to change, usual disclaimers, etc.).

It currently sports a very simple API, consisting of the following two methods:

GtkWidget*   lok_docview_new (LibreOfficeKit* pOffice);
gboolean     lok_docview_open_document (LOKDocView* pDocView,
                                        char* pPath);

The gtktiledviewer has been upgraded to use this widget, and looks much as it did before (although with some notable improvements, more below):

New and Improved (no more missing portions)

As mentioned above, there have been some further improvements to the tiled rendering in general:
  • All document content is now rendered, we don’t have missing text/images outside of the top-left section anymore.
  • Alpha channel is now set correctly in the tiled rendering output (less relevant for the widget where we could easily hide that, but useful for other uses of tiled rendering where additional manipulation of buffers can now be avoided).

I’m hoping to get started on tiled rendering support for Calc next (currently only Writer documents are supported). I’m also optimistic that we’ll be able to merge back onto master (allowing for more widespread experimentation) soon.

(For now we just dump one bitmap for the entire document within our widget — not hugely efficient, but simple and useable. Hopefully in the long run we’ll be able to move to having some form of proper tile compositing and also get rid of bitmap rendering — all of which can be hidden away within the widget implementation.)

Posted in LibreOffice