AutSoft unveils a new brand identity

The software development company is getting a complete makeover for its 10th birthday

After 10 years in operation, the Hungarian software developer known for its sustainable digital solutions is continuing its work with a refreshed brand identity. AutSoft celebrates its anniversary on July 4th, an occasion that also brings a transformation of the business brand.

Ten years is a long time in the life of a company. From the very beginning, a firm pillar of AutSoft's innovation and expertise has been its close cooperation with the Budapest University of Technology and Economics.

Over the past decade, one of Hungary's leading software and application development companies has followed a dynamic growth path. The startup, forged from a community of professionals and friends, has grown into a 160-person, professional, innovative organization serving large Hungarian and international enterprises.

A new era is now beginning for AutSoft: international expansion will be a defining element of its further growth.

Despite a challenging period, the company grew its revenue by more than 25 percent in 2020, and in recent years it has also laid the foundations of enterprise-scale operations.

The driving force behind this has been Dóra Cseresnyés, who took over the leadership of the company last April and, as CEO, set business renewal as her goal while preserving the company's professional and people-centered culture.

"Our external renewal also represents fresh momentum and renewed energy for us for the next ten years," says the CEO.

The business brand had not changed since the company was founded; the current refresh reflects the company's internal development and better conveys international trends.

https://youtu.be/349cxbHjYck

For the company's management, last year was confirmation that the market, both in Hungary and abroad, needs modern web and mobile systems.

This year the company took second place, and last year third place, in the IT Business "Most Successful ICT Company of the Year" ranking, and it also won the Magyar Brands Innovative Brand award. Its award-winning development, the "Megérint a zene" ("Touched by Music") application created in cooperation with the Kodály Institute, won the ITBUSINESS AWARD in the project development category.

The custom software development company continues to build its service portfolio on innovative technologies: artificial intelligence, industrial IoT solutions, and VR and AR systems will support the digital development of its Hungarian and international partners.

AutSoft's mission is to build a sustainable digital future with its innovative technological solutions.

Sign in with Apple - a Summary for Developers

This article is a summary of Apple's new technology called 'Sign in with Apple'. If you plan to integrate it into your application, or if you are simply interested in how it works compared to other third-party services, read on!

During WWDC19, one of Apple's main focuses was privacy. Ever since the smartphone market took off, Apple has been proud of its transparent data handling. It's worth noting that we will never know how much data companies actually collect, but that shouldn't overshadow Apple's newly introduced feature, Sign in with Apple.

Introduction to Sign in with Apple

The feature was revealed by Craig Federighi as part of iOS 13, among other privacy-focused features like the smarter Location Services, during the Keynote event (Sign in with Apple starts at 40:18).

Craig Federighi announces Sign in with Apple

Apple's own login system, in contrast to existing third-party services like Facebook Login or Sina Weibo Login, approaches the problem from a different perspective. Instead of sharing all kinds of personal information from your social network account with the developer, it only shares the absolutely necessary pieces. While other services give developers the chance to request even more data than they need, Apple's guidelines are very strict about the amount of data that may be required. Furthermore, thanks to Apple's email relay service, this is not just another, perhaps more transparent login option, but something developers actually have to prepare for.
In this article I will explain the most important differences from other services, and give a brief tutorial on how to integrate Sign in with Apple into your already existing app.

Differences from other services

You can choose what you share with the developer

Apple is very vocal about asking only for the basics from the user. They can reject applications from the App Store if developers try to ignore this guideline. As of right now, you can request at most two scopes with Apple's login.
request.requestedScopes = [.fullName, .email]
If we don't request any scope, only a unique identifier will authenticate the user.
In case we request the email address, the user can choose to share only a private, "throwaway" address, but more on that below.

Email relay service

No one likes getting spam, and even fewer people like getting spam without knowing where it came from. If we use the same address on every site, there is no way to tell which website leaked it. Of course, there are already tricks to work around this problem, like inserting a . character into our Gmail address, or appending the + character followed by a string that identifies the site, but these are not only service-specific, they are also easily detectable. Not even throwaway addresses are perfect, since in some cases you can actually receive important emails after registration.
Apple addressed this issue and came up with their own feature integrated into Sign in with Apple.

Announcing the relay service

Upon registration, users now have different options on how to share their email addresses. (Obviously, only if it's required)
They can choose from emails that are connected to their Apple ID (mostly a main address and the @me.com or @icloud.com addresses)

Phone screen

Introduced with Sign in with Apple, users now also have the option to choose 'Hide My Email', a relay service by Apple. Upon choosing Hide My Email, the application gets a unique address that differs not only between users but also between applications. Such an address looks like this: 47ce76g3b4@privaterelay.appleid.com
Even if the address gets shared publicly, no one can use it other than the application's developers. Every application that uses Sign in with Apple is required to be in Apple's Developer Program and to enable the function through the Developer Portal.

Apple Developer Portal

Every email address the application uses to communicate with its users (or the domain whose MX server handles the application's mail) has to be registered on the Developer Portal, thus preventing address sharing between different applications.
If a user doesn't want to get emails from a specific application anymore, they can easily disable it (at least it was mentioned during the WWDC Keynote, but we have yet to see it in practice).
Furthermore, Apple promises that the addresses are working (or at least they were connected to the Apple ID once) so there's no need to verify them with other methods either.

Antifraud system

In many cases, an application would like to know whether the end user is in fact a human or just a bot. Almost every service needs this kind of information, but most of the time the verification process is the most uncomfortable for the real users. Be it photo verification or, in more serious cases, sending pictures of blurred-out ID cards, the goal would be to get rid of these steps while maintaining security. Sign in with Apple offers a solution for this problem as well. While it's (obviously) not public how Apple determines real users, they promise that if they say a user is real, they probably are. Apple says the devices use machine learning and various parameters (like the age of the account) to guess whether someone is real or not.
This might all sound almost too good to be true, and since there aren't many real-world uses of the functionality yet, we can't be sure how widely companies will adopt the feature in the first place. But more on that later.

Policies regarding Sign in with Apple

Mandatory policies

Exceptions from usage

Design guidelines

Implementing Sign in with Apple in your application

Requirements:

After creating a new Project, go to Target -> Your App -> Signing & Capabilities

With the + Capability button in the top left corner, you can add the Sign in with Apple capability to your app.

Sign in with Apple

The code

import AuthenticationServices

After importing the framework, create a view and add the button to it:

let authorizationButton = ASAuthorizationAppleIDButton()
buttonView.addSubview(authorizationButton)

As you can see, the button appears according to your system settings.

English locale with Light Mode

Now it's time to handle when the user wants to sign in. For that, first add an action to your newly created button.

authorizationButton.addTarget(self, action: #selector(handleClick), for: .touchUpInside)

At the end of the file (for better readability), add conformance to two protocols as extensions of our class.

extension ViewController: ASAuthorizationControllerDelegate {
  // MARK: - ASAuthorizationControllerDelegate
}

extension ViewController: ASAuthorizationControllerPresentationContextProviding {
    func presentationAnchor(for controller: ASAuthorizationController) -> ASPresentationAnchor {
        return self.view.window!
    }
}

And then finally write the touch handler function.

@objc
func handleClick() {
    let appleIDProvider = ASAuthorizationAppleIDProvider()
    let request = appleIDProvider.createRequest()
    request.requestedScopes = [.fullName, .email]

    let authorizationController = ASAuthorizationController(authorizationRequests: [request])
    authorizationController.delegate = self
    authorizationController.presentationContextProvider = self
    authorizationController.performRequests()
}

After running our app and tapping the button, the phone will ask us to log in if we haven't already.

Not logged in screen

In order to use Sign in with Apple, you need to have 2FA enabled on your account.

2FA prompt

After a successful login, you can choose if you want to share your real e-mail address, or use Apple's new service that hides it from third-party sites.

Login prompt

Now it's time to write the code that handles the result from Apple's server. Upon completing the registration, we can handle the data in our ASAuthorizationControllerDelegate extension.

extension ViewController: ASAuthorizationControllerDelegate {
  // MARK: - ASAuthorizationControllerDelegate
    func authorizationController(controller: ASAuthorizationController, didCompleteWithAuthorization authorization: ASAuthorization) {
        if let appleIDCredential = authorization.credential as? ASAuthorizationAppleIDCredential {
            let userIdentifier = appleIDCredential.user
            let fullName = appleIDCredential.fullName
            let email = appleIDCredential.email
            let realPerson = appleIDCredential.realUserStatus

            print("nData to saven")
            print("Identifier: (userIdentifier)")
            print("Full name: (fullName)")
            print("Email: (email)")
            switch(realPerson){
            case .likelyReal:
                print("The user is most likely real.")
            case .unknown, .unsupported:
                print("Not sure if the user is real or not.")
            }
        }
    }

    func authorizationController(controller: ASAuthorizationController, didCompleteWithError error: Error) {
        // Handle error.
    }
}

Obviously, in real-world use, the data we get should be handled in a more sophisticated way, but for this demo, this method is enough to showcase the new feature's capabilities.

The result is an ASAuthorizationAppleIDCredential object, which contains the following fields (the ones that are important for identifying the user):

var user: String { get }

A String that identifies the logged-in user; for that reason, it always has a value.

var fullName: PersonNameComponents? { get }

The PersonNameComponents type has been part of iOS since iOS 9; if you want to read more about its fields, you can do so on Apple's website here.

Normally, for most accounts, you'd get familyName, givenName, and sometimes middleName, but every value is optional, and in some cases, for example for Japanese users, you'll also get a phoneticRepresentation.

To keep the demo code simple, I just print out the whole object, without any further data handling.

var email: String? { get }

The user's email address. There is no extra information indicating whether it's a hidden relay address or the user's own.

var realUserStatus: ASUserDetectionStatus { get }

Apple's help in detecting fraud. While the exact method is unknown (for a reason), Apple promises that if the returned ASUserDetectionStatus enum has a value of likelyReal (2), then it's very likely a real person who logged in. In other cases, the recommended path is to handle registration the way we did before.

Result after logging in

As you can see, after logging in we get all required data, including the hidden email, after using Apple's Hide My Email service.

Since we ran the demo code on a simulator, it can't determine whether the user is real or not.

Conclusion

Sign in with Apple seems good on paper, but we have yet to see how it can be implemented in real life.
The system seems polished and easy to use; after all, it was made by Apple. But it was only released a few months ago, and while the new system seems promising, there are obvious drawbacks as well, like multi-platform compatibility, coexisting with login flows on earlier iOS versions, and so on.
The IT ecosystem consists of more than just Apple devices (let alone devices running iOS 13+), so it's up to developers how they approach this problem. One thing is sure: Apple is pushing the technology really hard, so even if it causes a little headache, most applications can be expected to adopt this feature, and it's good to know its basics.

Author: Hina Kormoczi
autsoft.net

Finding the most common words and phrases in a song with SSIS – Part 2

In the previous part of this series, we got to the point where we can read the rows of a file into an SSIS job. Now, we'll continue by filtering these rows and extracting the most common words and phrases from them.

Filtering out unnecessary content from data

The next step is to do some data cleaning, because my lyrics files contain some unnecessary content that can distort the analysis. Just a few examples:

Original Liedtext
[Chorus:]
[Repeat chorus, 2nd verse, chorus]

The problem with these words is that they can end up among the most common words even though they don't belong to the real content, the lyrics. What we need is to throw out all the rows that contain unnecessary terms like "Liedtext" or "Chorus". The solution is filtering, which can be done with the SSIS Conditional Split component (under Common): the rows that fulfill a condition go to a specified output, while the others take another path.

So, pull a Conditional Split element to the Data Flow under the Lyrics file source component, and name it Filter chorus and liedtext out. After binding the Lyrics file source with the Conditional Split component, the output rows of the Flat File Source will be the input rows of the Conditional Split.

(Bonus: the data flow can be formatted to look good. Just select the whole data flow, click on the Format menu in Visual Studio, choose Width from the Make Same Size menu, and then select Centers under the Align menu. You'll end up with a layout like the picture below.)

Flat File Source and Conditional Split

After double-clicking on the Conditional Split, you'll see the window shown in the next picture, where you can configure it. You can add the component's different outputs to the table, along with the conditions that input rows must fulfill to be written to each output. In the Condition field, you have to write an SSIS expression. Luckily, the boxes above the table help us by providing the functions, operators, variables, and the input columns as variables, which you can drag and drop into the expression in the Condition field. Additionally, there is a Description box that shows information about the chosen function.

In our example, we only need two outputs: the rows which don't contain the words "Chorus" and "Liedtext", and the others, which contain at least one of them. For the former group, add a new output to the table by typing its name into the Output Name column; let it be Text. The rows of the latter group will be the exceptions to the condition of the Text output, so they can go to the default output. Name the default output "Chorus and liedtext" at the bottom of the window.

Now, the only remaining task for the Conditional Split is to express the condition of the Text output. Here, the FINDSTRING function will help us; its description can be read in the Description box in the next picture. It searches for a given occurrence of a string parameter in the given text and returns its location in the text. So the plan is to search for the first occurrence of each critical word in the row's text and check that the function didn't find any of them, i.e., their location is 0 (if the searched word starts at the first character, the function returns 1 as the location). Unfortunately, the function is case-sensitive, so we have to search for both the uppercase- and lowercase-beginning versions of the critical words to filter them out. The full condition is this ("liedtext" didn't occur in my files, so I don't check it):

(FINDSTRING(Text,"chorus",1) == 0) && (FINDSTRING(Text,"Chorus",1) == 0) && (FINDSTRING(Text,"Liedtext",1) == 0)

Alternatively, we could have defined these outputs the other way around: the output containing rows with critical words would be the one given in the table, we would check whether the FINDSTRING functions return positive values, and we would replace the AND (&&) operators with OR (||) operators. In that case, the rows that fit our needs would go to the default output.
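If it helps to see the two equivalent formulations outside of SSIS expression syntax, here is the same filtering logic written as plain Kotlin predicates (purely illustrative; the SSIS package itself only uses the expression above):

fun isCleanRow(text: String): Boolean =
    // Variant configured above: keep the row if it contains none of the critical words.
    !text.contains("chorus") && !text.contains("Chorus") && !text.contains("Liedtext")

fun isChorusOrLiedtextRow(text: String): Boolean =
    // Reversed variant: flag the row if it contains any critical word,
    // so the clean rows fall through to the default output.
    text.contains("chorus") || text.contains("Chorus") || text.contains("Liedtext")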

Conditional Split Transformation Editor

Term extraction

The next task will be the peak of our data flow: analyzing the input text and returning the list of the most common words in it, along with their frequency points.

For this job, the suitable SSIS component is Term Extraction. Microsoft uses the word "term" to cover both words and phrases. The description of Term Extraction can be read in the next picture. As it states, it searches only for English terms with the help of an English dictionary; this is why I recommended providing English-only texts to the flow at the beginning of the previous article. According to the description, the output consists of two columns: terms and their scores.

Term Extraction Description

Now, drag-and-drop a Term Extraction component (from Other Transforms) to the design surface under the Conditional Split component, and name it Term extraction. Then, pull the arrow from the Conditional Split to the Term extraction, and then, you should see this window below:

Input Output Selection

This means that you can choose between two outputs, both coming from the Conditional Split component you configured previously. Select Text, because that's where the cleaned rows we need are, so they will be the input of the Term Extraction component.

Now, double-click on the Term Extraction component, and the window on the next screenshot will come up. Here, at the Term Extraction tab, you have to choose the column whose values you want to analyze with this component. So, check the only column, Text (it’s not the same as the chosen output in the picture above, but the column from this output). At the bottom of the window, you can see the names of the two output columns, and you can change them if you like.

Term Extraction Transformation Editor: Term Extraction

Go to the Exclusion tab. As you can see here, you can work with exclusion terms, also known as stop-words. An exclusion term is a word or phrase that you want the analysis to skip [1], so it won't appear in the results.

If we want to use exclusion terms, then we have to connect to a database, and choose a column from a database table to get the exclusion terms. For simplicity, we don’t do it.

We could have put the filtered "Chorus" and "Liedtext" words into a stop-word database, but I thought that would be overkill, and I wanted to demonstrate filtering in an SSIS job.

Term Extraction Transformation Editor: Exclusion

At the Advanced tab, you can see several settings you can play with to affect the results. At Term type, you can choose whether only nouns, only noun phrases, or both should be included in the results. Phrases consist of multiple words, and noun phrases can also include adjectives or numbers. [1]

Under Score type, you can select the scoring method of the terms: it can be Frequency or TFIDF. The former measures the occurrence number of the actual term, while the latter has the formula below:

TFIDF of a Term T = (Frequency of T) * log( (#rows in Input) / (#rows having T) ) [1]

This means that TFIDF also takes the density of a given word in the text into consideration. The more the occurrences of a term are concentrated into relatively few rows, the higher the TFIDF score will be. A term's TFIDF score will be 0 if every row contains it.
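To make the two scoring methods more tangible, here is a small illustrative Kotlin sketch of the formula above (the function names are mine, and the logarithm base is assumed to be the natural log; this is not the actual SSIS implementation):

import kotlin.math.ln

// Frequency score: total number of occurrences of the term across all rows.
fun frequencyScore(rows: List<String>, term: String): Int =
    rows.sumOf { row -> row.windowed(term.length).count { it.equals(term, ignoreCase = true) } }

// TFIDF score: frequency weighted by how concentrated the term is in few rows.
fun tfidfScore(rows: List<String>, term: String): Double {
    val frequency = frequencyScore(rows, term)
    val rowsHavingTerm = rows.count { it.contains(term, ignoreCase = true) }
    if (rowsHavingTerm == 0) return 0.0
    return frequency * ln(rows.size.toDouble() / rowsHavingTerm)
}

If every row contains the term, ln(rows.size / rowsHavingTerm) is ln(1) = 0, which matches the statement above that such a term scores 0.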

Under Parameters, we can set the frequency score threshold and the maximum length of the extracted terms. For example, if you change the frequency threshold to 3 and have Frequency set as the score type, a term won't be in the results if it occurs fewer than 3 times in the input text. With the default "Maximum length of term" setting, if a phrase or word consists of more than 12 characters, it won't be in the results either.

At Options, you can check case-sensitive term extraction, if you want, but in our case, it’s better to analyze the words in the case-insensitive way.

Term Extraction Transformation Editor: Advanced

After clicking OK, the data flow should look like the one on the picture below. Before continuing the building of the data flow, let’s make it clear what deficiencies the Term Extraction has.

The first one is that under Term type, we can't choose verbs or verb phrases, so they won't appear in the results, although they could have been interesting too. For example, I would be curious how often Manowar uses the verbs "kill" or "die", though I would exclude overly common verbs like "have" or "be".

The other disadvantage of the term extractor is that it only works for English texts. This is because the term extraction algorithm relies heavily on its internal English dictionary and English grammar to tag the found words with parts of speech, taking English plural forms into consideration, and it throws out any word it doesn't recognize as an English noun, adjective, or number. [1] If we ran this algorithm on a non-English text, it would throw out everything. So, if we wanted a similar component that analyzes, say, Hungarian text, we would have to change the algorithm to take the suffixes and Hungarian plural forms into consideration, and we would have to create a Hungarian dictionary.

If we used a simpler solution, like counting every word of a text and listing them with their occurrence counts, a lot of irrelevant words would show up. In that case, we would have to create a list of all stop-words or all accepted words, which would be an enormous amount of work.

Conclusion

In this article, we filtered out the unnecessary rows of the analyzable content, and then, we set the Term Extraction component to find the most common terms of the text and calculate their frequency points. The next article will be about writing errors and results to files.

References

ARCore for Android developers - pARt 1: The basics

Introduction

Nowadays, augmented reality sounds like a buzzword, but as an Android developer you actually have a pretty easy-to-use toolset to do basic things - like showing a model - with only a few lines of code. The goal of this article is to introduce you to the tools and methods to use with the ARCore framework, focusing mostly on the Sceneform helper library.

First of all, you should have a look at the following guides:

If you are done with the guides, let's get started. You'll create an application in which you can add a chosen model to your augmented environment!

Preparation

This guide and sample application will use Kotlin and coroutines with a twist. All long-running tasks in Sceneform should be started from the main thread, and the library handles concurrency for us, but we'll use the suspending capabilities of coroutines anyway.

You'll need Android Studio 3.1 or newer and the Google Sceneform Tools (Beta) plugin installed. Hint: always make sure that the plugin version matches the ARCore dependency version, otherwise you can end up with errors that are hard to debug.

Create a new project with an Empty Activity and a minimum API level of 24. This seems pretty high right now, but Sceneform requires it and most of the supported devices are on this API.

Dependencies

Make sure that your project level build.gradle file contains the google() repository, and add the following to the app level build.gradle:

android {
    compileOptions {
        sourceCompatibility 1.8
        targetCompatibility 1.8
    }
}

dependencies {
    // ARCore
    def ar_core_version = '1.14.0'
    implementation "com.google.ar:core:$ar_core_version"
    implementation "com.google.ar.sceneform.ux:sceneform-ux:$ar_core_version"
    implementation "com.google.ar.sceneform:core:$ar_core_version"

    // Coroutines
    def coroutines_version = '1.2.0'
    implementation "org.jetbrains.kotlinx:kotlinx-coroutines-core:$coroutines_version"
    implementation "org.jetbrains.kotlinx:kotlinx-coroutines-jdk8:$coroutines_version"
    implementation "org.jetbrains.kotlinx:kotlinx-coroutines-android:$coroutines_version"
}

The compileOptions configuration is necessary because the ARCore library is based on Java 8 features. Next to the usual coroutine dependencies, you may notice the jdk8 extension library, which you'll use to bridge the coroutine functionality with the CompletableFuture in JDK8.

Manifest modifications

Next, you'll need to update the AndroidManifest.xml file:

<manifest ...>

    <uses-permission android:name="android.permission.CAMERA" />
    <uses-feature android:name="android.hardware.camera.ar" />
    <uses-feature android:glEsVersion="0x00030000" android:required="true" />

    <application
       ...
       android:largeHeap="true"
       ... >
        ...
        <meta-data android:name="com.google.ar.core" android:value="required" />
        ...
    </application>

</manifest>

You're defining the minimum OpenGL version, the CAMERA permission, the AR required value, and restricting the application in the Play Store to AR capable devices.

Add the sampledata folder

The next step is to change the project tab's view mode from Android to Project and create a new sampledata folder inside the app folder.

Switching to Project view

The created sampledata folder inside app

You can put all original model files into this folder. These won't be packaged into the final application, but will be part of the project. You'll use this folder later!

Would you be surprised if I said you are already halfway to your goal?

Plane finding

So let's assume you have a MainFragment or MainActivity that starts when the application is launched. Its layout XML should look like this:

<?xml version="1.0" encoding="utf-8"?>
<FrameLayout xmlns_android="http://schemas.android.com/apk/res/android"
    android_layout_width="match_parent"
    android_layout_height="match_parent">

    <fragment
        android_id="@+id/arView"
        android_name="com.google.ar.sceneform.ux.ArFragment"
        android_layout_width="match_parent"
        android_layout_height="match_parent" />

</FrameLayout>

The root ViewGroup contains only a single fragment element, which references ArFragment. This Fragment is a complete, all-in-one solution for handling the basic AR-related configuration: checking the ARCore companion application's availability, checking the API level, handling permissions, and so on.

Now you can install the application on an emulator - or preferably, a physical device. You should see something like this (with permission and companion app handling at first start, if needed):

The initial run of the application, with plane finding

As you can see, the built-in Fragment gives us a hand-waving icon which guides the user on how to move the phone around, and if the system finds a plane, it highlights it with small white dots. Note that ARCore only works on colorful, non-homogeneous surfaces! For example, it's nearly impossible for it to detect a plain white wall or floor.

Add your model

Next, you'll need to find a model to use. You could use your own models made in Blender, 3DS Max, Maya, etc., or download one from the Internet. In my opinion, a good source for this is Sketchfab, where you can find free models with CC licensing and a "bonus feature". In many cases, you will face an issue where the textures will not appear on your model when you place it in the AR environment. There are many ways to handle this, but to keep it simple, you may download the model from Sketchfab automatically converted to gltf, which is one of the supported file formats. If that doesn't work either, then I suggest looking for another model, as debugging or fixing 3D models is generally not worth the time as an Android developer.

Because a certain series is so popular right now (and I personally like it too), you will use a Baby Yoda model in the application, this one:

A note about the model: it's made up of around 10,000 triangles and multiple image texture files, which means it's pretty complex. This greater model complexity comes with a greater memory footprint, which is why you added the largeHeap="true" option to the AndroidManifest.xml. At least it looks great!

You should save this as an auto-converted gltf, unpack it, and copy the model file with all related files (textures, .bin, etc.) to the previously created sampledata folder. Then, in Android Studio, right-click on the .gltf file and select the Import Sceneform Asset option. This will open up a dialog:

The Import Sceneform Asset dialog

Here you can leave everything on default, and just click Finish.

If everything goes well, a Gradle task will start and convert your model to a Sceneform Asset (.sfa) and a Sceneform Binary (.sfb) file. You will find the latter in your src/main/assets folder, and this is what gets compiled into your application. The relation between the sfa and sfb files is that the sfb is generated from the sfa, so you should always modify the sfa file to apply any changes to your binary model. At the end of this tutorial, if you find that your model is too small or too large when shown, open the generated sfa file, look for the scale parameter, and set the value to your liking. For the Baby Yoda model, you can try setting it to 0.15.

So right now you have a converted model and a working plane detecting application, but how do you add the model to the scene?

Placing the model

First, you should load the binary model into the ARCore framework. I assume you are familiar with coroutines and use a CoroutineScope somewhere in your application to handle background tasks. For the sake of simplicity, you can also use the lifecycleScope of a Fragment.

private fun loadModel() {
    lifecycleScope.launch {
        yodaModel = ModelRenderable
            .builder()
            .setSource(
                context,
                Uri.parse("scene.sfb")
            )
            .build()
            .await()
        Toast.makeText(
            requireContext(),
            "Model available",
            Toast.LENGTH_SHORT
        ).show()
        initTapListener()
    }
}

Here, you build a ModelRenderable with a given source and await() its completion. The build method returns a CompletableFuture, and the aforementioned JDK8 coroutines library provides the await() extension for it. This component stores the model and is responsible for the render mechanism. The model name in the Uri.parse() call should be the same as the generated .sfb file name.
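Note that build() can also fail, for example if the asset name is wrong or the file is corrupt, in which case await() throws. A minimal, assumed error-handling variant of the loading code could look like the sketch below (the error message is just a placeholder):

private fun loadModel() {
    lifecycleScope.launch {
        try {
            yodaModel = ModelRenderable
                .builder()
                .setSource(context, Uri.parse("scene.sfb"))
                .build()
                .await()
            initTapListener()
        } catch (e: Exception) {
            // The model could not be loaded, so we don't register the tap listener.
            Toast.makeText(requireContext(), "Model loading failed", Toast.LENGTH_SHORT).show()
        }
    }
}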

Then you initiate the tap listener. For this purpose, you have to have a reference to the contained Fragment instance:

override fun onViewCreated(view: View, savedInstanceState: Bundle?) {
    super.onViewCreated(view, savedInstanceState)
    arFragment = childFragmentManager.findFragmentById(R.id.arView) as ArFragment
    loadModel()
}

With that, the tap listener initialization is as follows:

private fun initTapListener() {
    arFragment.setOnTapArPlaneListener { hitResult, _, _ ->
        val anchorNode = AnchorNode(
            hitResult.createAnchor()
        )
        anchorNode.setParent(arFragment.arSceneView.scene)
        val yodaNode = Node()
        yodaNode.renderable = yodaModel
        yodaNode.setParent(anchorNode)
    }
}

As you can see, it's pretty easy to add a model to your AR scene. In just a few steps:
- create an Anchor from the tap's hit result and wrap it in an AnchorNode,
- attach the AnchorNode to the scene,
- create a Node, set its renderable to the loaded model, and parent it to the AnchorNode.

And that's it, you are done! Build and run the application, find a plane, and place the model by tapping on it! Magic.
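If you also want the user to be able to move, rotate, or scale the placed model with gestures, one option is to swap the plain Node for a TransformableNode from the sceneform-ux package. This is just a sketch of that idea, not part of the original sample:

private fun initTapListener() {
    arFragment.setOnTapArPlaneListener { hitResult, _, _ ->
        val anchorNode = AnchorNode(hitResult.createAnchor())
        anchorNode.setParent(arFragment.arSceneView.scene)

        // TransformableNode reacts to drag, pinch, and twist gestures.
        val yodaNode = TransformableNode(arFragment.transformationSystem)
        yodaNode.renderable = yodaModel
        yodaNode.setParent(anchorNode)
        yodaNode.select()
    }
}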

The final demo with models being added to the scene

Summary

This guide should have given you a small introduction into AR usage as an Android developer. I hope you liked this article, and the small but effective sample application.

You can find the source code here.

We are planning to release more AR related articles, so be sure to follow us!

Language understanding in .NET applications

There is an increasing emphasis on natural user interfaces in modern applications. As systems become more and more complex, it is an understandable desire to hide some of this complexity from the user and enable a dynamic, easy-to-use and intuitive interfacing solution. Examples for this include gesture control, different sensor aided solutions (like motion-, position- and eye tracking) and natural language understanding. In this article, I am going to present a tool for the latter in the form of Microsoft’s LUIS.

Overview

LUIS stands for Language Understanding (also a play on [Natural-]Language User Interface – [N]LUI) and it is a part of the Microsoft Cognitive Services family. It aims to help create models that continuously improve utilizing machine learning.

LUIS aims to collect valuable core information from what the user says, matching it with a predefined intent (the assumed goal of the user) and finding entities in it. Developers can then use this information in code and act upon it. LUIS can also be integrated with the Azure Bot Service, enabling the creation of sophisticated bots.

You can visit the US LUIS site here, or the EU site here.

Building a LUIS model

A LUIS model is what the machine learning system will use to try to interpret what the user says and collect further information in natural speech.

Each model has two important parts:

For example, look at the following natural language sentence:

“I would like to enable dark mode.”

LUIS could match this to an "Enable" intent, with "dark mode" as an entity, so our application would know that the user wants to turn on a setting or configuration option (enable), that this particular option is "dark mode", and it would change the style of the user interface accordingly.

Or LUIS could recognize it as a "Like" intent, and our app could issue a like, share, and follow on our Facebook page. I'm sure our application would deserve such treatment, but if we don't want to annoy the user, here is what we can do to make sure LUIS nails it most of the time.

Identifying the right intents

First of all, we need to decide upon the intents our model will have. This is a highly important and delicate step, as LUIS will weigh each and every intent against the user's words and decide the outcome by scores. The trick is not in the number of intents, but rather in the conceptual "distance" between them. In other words, the more the usual wording of what a user might say for intent A differs from intent B, the better.

For example, we might have two different services and business logic in our fast food delivery application for ordering a pizza and ordering a hamburger, but when the user phrases it, it will be something like "I want to order an extra-large pepperoni pizza" or "order a cheeseburger". At first glance it may be tempting to create two different intents for pizza-order and hamburger-order, but as your service and app grow and you introduce more foods and variables, these intents will get closer and closer to each other, resulting in an increasing error rate in LUIS.

It would be better to just have the intent “Order” and other information will be put into entities, like so:

"I want to order an extra-large pepperoni pizza"

"order a cheeseburger"

I will try to give some tips about how to select distinctive intents:

The “None” intent

Every model comes with a "None" intent by default. This is used by LUIS to group everything that is outside of your application's domain. If the model didn't have this, LUIS would try to force a valid intent onto every user utterance, which is not the desired behavior. The None intent lets LUIS signal that it probably encountered something that was not meant as a voice command (or that our model needs improvement).

It is considered best practice to put at least 10% of all example utterances into the None intent, or at the very least one utterance for every other intent. You can add some unrealistic (and funny) utterances here that don't have the slightest chance of corresponding to any real user scenario, but it's best to come up with examples that are close to, yet differ significantly from, your intents, or with something the user might plausibly say but can safely be assumed not to have meant as a voice command. E.g.:

Adding entities

The next step in building a model is defining entities. As I mentioned above, entities are much like variables; they add detail for our application to use. An important distinction between entities is whether they are machine-learned from context or not. The latter are defined by the developer for exact or pattern matches and will not be modified by LUIS.

There are a few types of entities in regards to how they function, which are the following:

List, regex and pattern entities are non-machine-learned entities.

Improving performance

As an active machine-learning model, LUIS might require some work to maintain precise functioning. Luckily, the portal offers some useful tools and insights into how our model performs and how to improve upon it.

In the dashboard, we get a summary: a training evaluation that shows how many queries were predicted successfully and how many were unclear or incorrect. The portal also shows the top examples of intents that we can improve on.

An important metric here is called data imbalance, which points out if an intent has significantly fewer example utterances defined than the rest of the intents. This can lead to imbalanced weights when LUIS tries to match a sentence to an intent.

The portal will also tell us the prediction metrics by intents, highlighting the most problematic ones.

After selecting a problematic intent, we can open a detailed view, which offers useful insights. The portal lists the score for each utterance, a calculated value that LUIS predicts. It is compared to the nearest rival score, which is the top-scoring intent among all intents other than the one we expect. If the difference is below zero, LUIS will incorrectly predict utterances very similar to the one in question, and if it is very close to zero, the prediction will be unclear.

Managing your LUIS app

In the manage view, we can administer our LUIS model and its endpoints. An endpoint is used to query the service, which we can do from code. You can set and query basic information about your model here, like the name, description, app ID, etc.

Since LUIS integrates deeply with Microsoft Azure, an Azure resource is needed to use it. This can be set up in different tiers, which you can read more about here. An authoring resource will also be needed for administration functions. It is important that the authoring and prediction resources share the same region (either inside the USA or not), because each LUIS portal only accepts the corresponding resources.

You can create multiple versions of a LUIS model, which can be cloned from each other. You can also select which version is active on the portal and which is published to the endpoint. This makes it easy to test new features and intents and to introduce refactors without the fear of breaking something that works in the live environment.

Integrating LUIS in a .NET environment

The LUIS endpoint accepts queries over HTTPS on a specific URL, which can be acquired from the Manage -> Azure Resources menu. To identify valid and authorized requests, it uses keys (as many other Azure resources do) that need to be supplied with the request. The query itself is given as the URL parameter "q". The endpoint then processes the request and sends back the data as JSON. A pretty basic query would look like this:

using (var client = new HttpClient())
{
    client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", SUBSC_KEY);
    var response = await client.GetAsync(ENDPOINT_URL + QUERY).ConfigureAwait(false);
    var responseString = await response.Content.ReadAsStringAsync().ConfigureAwait(false);
    var obj = JsonConvert.DeserializeObject<ResultDTO>(responseString);
    return obj;
}

The most important data that LUIS will return:

Note that since the V3 API, the query, its available parameters, and the response format have changed in some meaningful ways. For example, a V3 JSON response contains all intents with their corresponding scores, i.e., how likely LUIS thinks each one matches the query. This is great if you want more in-code control over the predictions or just more information for detailed logging and error handling. You can read more about the specific result formats for the V2 API here and the V3 API here.

And that's basically it! Set up a model class or DTO based on the response JSON documentation, send an HTTPS query to the endpoint with the user's text (or speech-to-text converted spoken input), and deserialize the response with a solution of your preference. Note that there might be useful repos or ready-made solutions out there for integrating LUIS into .NET (and other) infrastructures, but I haven't really checked them out because of the ease of use demonstrated above; feel free to experiment with them.

And lastly for this section, I’d like to point out a few tips that might be helpful for coding with LUIS:

Testing

Even though the portal offers plenty of insight into how our model performs, in some cases external testing might still be useful. It's a decision between periodically allocating small amounts of time to analyzing and acting upon the portal's suggestions, and creating a robust test framework in code once, which can be easily expanded to cover new intents and run in a few seconds with a single click (okay, a few clicks in VS). But...

I’d recommend doing both of the above, for the following reasons:

Of course your mileage may vary and you might not need to make LUIS so bulletproof, or your project might not have the time needed for a testing framework, and in this case, just stick with the portal based analysis once in a while.

Anyway, here are some tips and code snippets to make such a framework… work:

public static async Task<ResultDTO> GetValidatedResult(string query)
{
    ResultDTO result = await Utility.GetIntent(query);

    TestQueryMatch(query, result);
    ValidateResult(result);

    if (VALIDATE_INTENT_SCORE)
        ValidateIntentScore(result);

    return result;
}

This function gets called for all the test queries of an intent. It is only responsible for providing a valid ResultDTO. That means doing basic sanity checks: the returned query matches the given one, the result is not null, the TSI (top scoring intent) is not null, the score is not 0, and the TSI is not the "None" intent. Basic stuff you can easily do with asserts, so I won't provide those code snippets, just a basic understanding of the code flow.

However, I recommend experimenting with a TSI score threshold. This means that our test framework (and app) can refuse results that fall below a certain TSI score. As you improve your model, you can raise this threshold. A good baseline is 0.25. We can also let the developer turn this check off with a single bool flag.

public static void TestIntent(ResultDTO expected, ResultDTO actual, bool testGeneratedEntities = false)
{
    // Testing TSI match
    Assert.AreEqual(expected.TopScoringIntent.Intent, actual.TopScoringIntent.Intent, $"Your error message here for TSI mismatch.");
    
    if (testGeneratedEntities)
    {
        var generatedEntities = Utility.GetEntitiesFromQuery(actual.Query);
        expected.Entities = generatedEntities;
        
        TestEntities(expected, actual);
    }
}

Now comes the semantic validation. Of course we check the TSI against the expected one, but we make checking all entities optional. This is because defining each entity for each test utterance is much harder and, in some cases, unnecessary. However, the testing framework supports generating these dynamically. This can be achieved using a static dictionary or an in-memory dataset that the developers maintain for key entities.

And so a single unit test becomes this easy:

[TestMethod]
public async Task TurnOn()
{
    ResultDTO expected = Helper.AsResultDTO("TurnOn");
    Helper.TestIntent(expected, await Helper.GetValidatedResult("please turn on the lights in the living room"), true);
    Helper.TestIntent(expected, await Helper.GetValidatedResult("turn on all lights"));
    Helper.TestIntent(expected, await Helper.GetValidatedResult("switch on the lights in the house"));
}

Summary

We’ve seen how LUIS can help implement a natural language interface in our modern application, which is a steady trend nowadays. We’ve also learned how a LUIS model is built and what are the best practices and traps in constructing it.

It's safe to say that, used properly, natural language understanding can really boost the usability of our app, but it's not a magic solution that applies to every case and environment. However, as IT enthusiasts, it's always exciting to try out something new, and I hope your app will find its new friend in LUIS.

MotionLayout: A new way to create animations on Android

Oh, no! A new animation framework for Android, again? We have quite a few already, do we really need a new one? First, let's see the previous approaches we used to create animations in our applications!

Animation solutions so far

ObjectAnimator

This subclass of ValueAnimator provides support for animating properties on target objects. We can define it in XML files:

<objectAnimator xmlns:android="http://schemas.android.com/apk/res/android"
	android:duration="1000"
	android:propertyName="y"
	android:repeatCount="1"
	android:repeatMode="reverse"
	android:valueTo="200"
	android:valueType="floatType" />

Or we can use it from code:

ObjectAnimator.ofFloat(view, "translationX", 100f).apply {  
	duration = 2000  
	start()  
}

ObjectAnimator uses reflection to set the property given by its name as a String. Alternatively, we can avoid reflection by using the built-in Property wrapper (like View.TRANSLATION_X) instead of hardcoding the property name.

ObjectAnimator.ofFloat(view, View.TRANSLATION_X, 100f).apply {  
	duration = 2000  
	start()  
}

This way the property value changes using the setter directly.

The biggest problem besides reflection is that you need a new ObjectAnimator for every view you want to animate, because it doesn't support simultaneous changes of several objects. However, there's definitely a pro for ObjectAnimator too: it can animate a property of any type.

The Animation class and its descendants

The biggest downside here is that you can animate only one property at a time. You can use RotateAnimation, AlphaAnimation, ScaleAnimation, etc. to animate basic properties, and you still need a separate Animation instance for each property, played together in an AnimationSet. The other limitation is that it can only be used on View descendants.

ViewPropertyAnimator

Finally, we can animate multiple properties at once! 🙂 ViewPropertyAnimator was created to replace ObjectAnimator, and it can modify the defined properties simultaneously. Furthermore, it's more efficient thanks to optimized method calls. Last but not least, the syntax is much cleaner and more readable.

This is how we use ObjectAnimator to animate multiple properties, making use of an AnimatorSet:

val animX = ObjectAnimator.ofFloat(view, "x", 50f)  
val animY = ObjectAnimator.ofFloat(view, "y", 100f)  
AnimatorSet().apply {  
    playTogether(animX, animY)  
    start()  
}

And the same using the ViewPropertyAnimator framework:

view.animate().x(50f).y(100f)

ValueAnimator

ValueAnimator allows us to animate any number of objects of any type at the same time using one instance of it.

ValueAnimator.ofFloat(0f, 3f).apply {  
    duration = 3000  
    addUpdateListener { animation -> 
        view.translationX = animation.animatedValue as Float 
    }  
    repeatCount = 5  
}.start()

So, these are the most commonly used animation solutions, which are still good, but for different cases.

Let's jump into that new one.
♪Here come the Men in Black...♪ Here comes the MotionLayout.

MotionLayout

MotionLayout was introduced in 2018 at Google I/O. At the time of writing this article, it's on version 2.0.0 Beta 3.

dependencies {  
	implementation 'androidx.constraintlayout:constraintlayout:2.0.0-beta3'  
}

As we can see from the dependency, MotionLayout is a kind of ConstraintLayout; that's why it's included in the ConstraintLayout library. More precisely, it's a descendant of ConstraintLayout that extends its parent's functionality with animation capabilities.

Layout structure

To get started, in the screen's layout XML file we define a MotionLayout, and we add all the necessary views to the layout. In this layout file, we constrain only those views which are not animated. The rest - which are animated - are constrained in the MotionScene file. That's where the magic (at least the definition of the animations) happens, and we'll look at it in a moment.

That's why it's important to link the MotionScene to the MotionLayout by setting the layoutDescription attribute of the MotionLayout.

<androidx.constraintlayout.motion.widget.MotionLayout
	xmlns:android="http://schemas.android.com/apk/res/android"
	xmlns:app="http://schemas.android.com/apk/res-auto"
	android:layout_width="match_parent"
	android:layout_height="match_parent"
	app:layoutDescription="@xml/motion_scene"
	app:motionDebug="SHOW_ALL">

There's also a debug option here (the motionDebug attribute) to help visualize the paths of the animated views and the progress of the animation.

Motion debug

MotionScene

The MotionScene has three important parts:

ConstraintSet

This class allows you to define a set of constraints programmatically to be used with ConstraintLayout. If we define constraints in the layout file as well as in the MotionScene, the latter overrides the former.

We have two options to define the ConstraintSet descriptors:

<ConstraintSet android_id="@+id/start">  
	<Constraint  
		android_id="@id/imageView"  
		android_layout_width="80dp"  
		android_layout_height="80dp"  
		app_layout_constraintBottom_toBottomOf="parent"  
		app_layout_constraintStart_toStartOf="parent" />  
</ConstraintSet>

<ConstraintSet android_id="@+id/end">  
	<Constraint  
		android_id="@id/imageView"  
		android_layout_width="80dp"  
		android_layout_height="80dp"  
		app_layout_constraintBottom_toBottomOf="parent"  
		app_layout_constraintEnd_toEndOf="parent" />  
</ConstraintSet>

Here we can define some basic attributes beside constraints:

If we need other attributes apart from these basic ones, we have to define a CustomAttribute like this:

<Constraint
	android:id="@+id/button"
	android:layout_width="64dp"
	android:layout_height="64dp"
	android:layout_marginStart="8dp"
	app:layout_constraintBottom_toBottomOf="parent"
	app:layout_constraintStart_toStartOf="parent"
	app:layout_constraintTop_toTopOf="parent">
		<CustomAttribute
			app:attributeName="backgroundColor"
			app:customColorValue="#D81B60"/>
</Constraint>

Transition

The Transition in the MotionScene will define how we get from one state to the other. There are two main attributes here:

app:constraintSetStart="@+id/start"
app:constraintSetEnd="@+id/end"

These define the start and the end state of the animation. If we chose to define the constraints in layout files, we need to add the IDs of the layout files here, otherwise, the IDs of the ConstraintSets. There are also other attributes for setting the duration and the interpolator of the transition.

Inside the Transition tag we can define multiple elements to customize the transition's behavior (so, to define the animated view's movement).

<Transition
    app:constraintSetEnd="@+id/end"
    app:constraintSetStart="@+id/start"
    app:duration="3000">

</Transition>

First of all, we can define two kinds of event handlers in case we want to control the animation.

<Transition
    app:constraintSetEnd="@+id/end"
    app:constraintSetStart="@+id/start"
    app:duration="3000">

    <OnClick
        app:clickAction="toggle"
        app:targetId="@id/imageView" />
</Transition>
<Transition
    app:constraintSetEnd="@+id/end"
    app:constraintSetStart="@+id/start"
    app:duration="3000">

    <OnSwipe
        app:dragDirection="dragRight"
        app:onTouchUp="stop"
        app:touchAnchorId="@id/imageView" />
</Transition>

To customize the paths or attributes of the animated views, we can define KeyPositions, KeyAttributes, and CustomAttributes inside the KeyFrameSet tag.

With KeyPosition we can define a specific point on the screen, and the percentage when the animation has to reach that point. This point can be defined in three coordinate systems:

The official documentation explains them in detail.

Besides path points, we can define the attributes which will be animated. For basic attributes we use the KeyAttribute tag, and set a framePosition, a motionTarget, and the animated attribute. If we need some other view parameter to animate, we can insert a CustomAttribute tag inside KeyAttribute like this:

<KeyAttribute
	app:framePosition="25"
	app:motionTarget="@+id/imageView">

	<CustomAttribute
		app:attributeName="backgroundColor"
		app:customColorValue="@color/brand_alpha" />
</KeyAttribute>

We have to specify the attribute's name as a string value and, depending on the attribute's type, specify the corresponding custom value parameter (customColorValue in this case) with the value, as seen above.

Motion Editor

Starting in Android Studio 4.0 Canary 1, we have a new tool to create and edit MotionLayout on a graphical user interface. This is basically the layout designer view for MotionLayout.

Motion Editor

On the left side, there's an editable preview window, the same that we know from the ConstraintLayout editor. To its right, the top half of the screen is for the visualization of the MotionLayout, showing the start and the end states we've defined (ConstraintSets) and the transition between them. Depending on what you select from these components (the selected item will be highlighted), the content will change below.

When...

Finally, we can preview the defined motion frame by frame or played back and forth.

Conclusion

We've taken a look at the main features of MotionLayout, the new animation framework on Android. It's the best solution for complex motion handling. In addition to describing transitions between layouts, MotionLayout lets you animate all layout properties simultaneously. The best part of it is that it inherently supports seekable transitions. This means that you can instantly show any point within the transition based on some condition, such as touch input.
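As a quick illustration of that seekability, the transition can also be driven directly from code. Assuming the MotionLayout defined above is referenced as motionLayout, a minimal sketch looks like this:

// Jump to an arbitrary point of the transition (0f = start state, 1f = end state),
// for example based on a scroll offset or other touch input.
motionLayout.progress = 0.5f

// Or animate to one of the defined states.
motionLayout.transitionToEnd()
motionLayout.transitionToStart()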

So let's make our applications more spectacular with some awesome interactive animation!

*All images in the MotionEditor section are from developer.android.com

Publishing an Android library to MavenCentral in 2019

Update: An updated version of this article for early 2021 can be found here.

Introduction

Creating a library is challenging enough on its own. Coming up with the idea, implementing it, making sure you have a nice, stable public API that you maintain... That's already lots to do.

After all that, you need to make your library available to the public. Technically, you could distribute the .aar file any way you want, but the norm is publishing it to a publicly available Maven repository. It's a good idea to use one of the well-established repositories that people are already likely to have in their projects, to make getting started with your library as easy as possible.

The simplest choice would be JitPack, which might not give you much in terms of customization or control, but is very easy to get started with. All you have to do is publish your project on GitHub, and JitPack should be able to build and distribute it immediately. If you're new to libraries, this is a great choice for getting your code out there.

The next step up is Jcenter, which requires you to register a Bintray account and request that your package be included in the Jcenter repository. You also have to set up the Bintray publication plugin in your project, which can take some time when you're doing it for the first time. Jcenter is still to this day included by default in every new Android project, but unfortunately, it is far from perfect.

Finally, the fanciest place you can be in is The Central Repository by Sonatype, which I'll refer to as MavenCentral from here on out. This is the place to be if you're a Maven dependency. Artifacts on MavenCentral are well trusted, and their integrity can be verified, as they are all required to be signed by the author.

The publication process, however, and especially automating it, can be quite a headache. It's easy to get stuck at many of the various steps no matter what tutorials you're following, especially if they're out of date, and this can get demotivating very quickly. It's not uncommon to give up and just use Bintray/Jcenter instead.

If you do feel ready for a bit of a challenge, and want to do things the right way, here's how you can get a library into MavenCentral, in the summer of 2019.

Overview

A quick overview of the steps to go through:

- Getting a repository to publish to by registering with Sonatype and claiming your group ID
- Creating and exporting a GPG key pair for signing your artifacts
- Setting up publication in your project with Gradle scripts
- Performing your first release manually
- Automating the closing and releasing of staging repositories
- Hooking everything into continuous integration

Get a drink and strap in, this is gonna be a long ride.

Our setup & prerequisites

We'll be using the following tools for this tutorial. You are free to use alternatives, but these are our favourites, and they work well for us:

- Kleopatra for managing GPG keys
- GitHub for hosting the open source repository
- GitLab CI for continuous integration

The last two points above make for an odd pair. We use GitHub because whether we like it or not, it's the go-to platform for open source projects. We also use GitLab CI, because that's our internal CI setup that we're used to. Thankfully, the configuration steps will be very similar for whatever CI solution you're using.

For the purposes of this article, we'll assume that you already have your library developed, and have uploaded it to a public GitHub repository.

We'll use our open source library Krate in our examples, which we've written about in one of our previous posts already. Krate is a SharedPreferences wrapper based on convenient Kotlin delegates.
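Once published, the goal is that users can pull the library in with nothing more than the standard MavenCentral setup in their build.gradle - roughly like this, using Krate's coordinates (the version number here is just an illustration):

repositories {
    mavenCentral()
}

dependencies {
    implementation 'hu.autsoft:krate:1.0.0'
}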

Getting a repository to publish to

First things first, you'll need an account in the Sonatype Jira. Head over there and hit Sign up. Registration is straightforward; it just requires a username, an email, and a password.

The Sonatype Jira login page

After you've logged in, you'll need to open an issue, asking for access to the group ID that you'll want to publish your project under. For us, based on our domain name, our group ID is hu.autsoft. As you'll see in a moment, it's best to choose a group ID that matches a domain that you own, otherwise you'll have to stick with having a GitHub-based group ID (see details here).

After choosing a language and an avatar, you'll end up on this landing page - click on Create an issue:

The Sonatype Jira landing page

Select Community Support - Open Source Project Repository Hosting and then New Project:

Creating a new issue, basics

On the next page, fill out the following fields:

Creating an issue, details

Soon after opening it, your issue will get a comment telling you to verify that you own the domain:

A comment asking for domain verification

To comply with this, add the required TXT record to your domain (replace the issue number with your own!):

@    TXT    1800    OSSRH-12345

When done, don't forget to leave a comment on the issue so that Sonatype knows to check the record. You'll eventually get a response telling you that you now have deploy rights - congrats!

Confirmation of the domain verification

Creating a GPG key pair

As we alluded to earlier, artifacts published on MavenCentral have to be signed by their publishers. You'll need a GPG key for this.

What we'll show you here is a GUI client, Kleopatra, which helps you manage your keys with ease. You could also generate your key from the command line using gpg, as described here, for example.
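For reference, the command line route might look roughly like this (a sketch, not from the original guide; exact flags can differ between GnuPG versions, and 7ACB2D2A stands in for your own key ID):

gpg --full-generate-key                    # interactive key pair generation
gpg --list-keys --keyid-format short       # look up the short ID of the new key
gpg --keyserver keyserver.ubuntu.com --send-keys 7ACB2D2A   # upload the public key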

To get started, go to File -> New Key Pair...

Selecting the new key pair menu

In the first step of the wizard, choose the personal OpenPGP key pair option.

Choosing personal key pair

Fill out at least one of the name and email fields.

The name and email fields

You can also check the Advanced settings for the key that's being generated here. The defaults should be all good, but here's our settings - notably, there is no expiration date set in the Valid until field.

A glimpse at the advanced settings

A quick review of the parameters, and you're ready to generate your keys!

Parameter review page

Finally, your keys have to be sealed with a passphrase. This should be a strong, secure password that you're not using elsewhere. Whoever has access to the private key and this passphrase will be able to sign in your name.

Passphrase entry

After the key pair has been generated, you'll see the following confirmation window. This contains the fingerprint of the key pair, which you'll use to identify it. You also have the option to back up your key and to upload the public key to a public directory.

Success dialog

You should perform this upload by choosing Upload Public Key To Directory Service..., and confirming the action in the dialog that appears:

Uploading the public key to a directory

Note that if you ever want to retract this key (perhaps because your private key was exposed), Kleopatra can also generate a revocation certificate for you, which essentially acts as a kill switch by marking the public key as no longer valid. You can see this option on the Details page of the key pair.

Key pair details page

Exporting your GPG key

To sign your artifacts, you'll need to have your private GPG key handy as a file. Exporting it takes just a couple quick steps. Right click your key, and choose Export Secret Keys...

Selecting the export option

Select the destination of the exported key. It's a good idea to name it after its fingerprint - or the last couple characters of the fingerprint, the key's ID.

Choosing the export destination file

Confirm the passphrase of the key that's being exported, and you're done.

Passphrase confirmation

Setting up publication in your project

That's a lot of work without touching your project, but the time has come to do that now. You're going to add some Gradle scripts that set up the publication plugin required to push artifacts to a repository, configure the properties of the library you're releasing, and grab the necessary authentication details along with the private key you've just exported.

To start, create a new file called publish-mavencentral.gradle in a new scripts folder inside your project. All the publication logic can go here, and then you can reuse it in multiple modules if your library has multiple artifacts to publish. We'll go through the contents of this script part by part, with explanations. You can always find the complete, up-to-date file here in the Krate repository.

First, you declare the sources artifact for the library. This will make sure that the source files are packaged along with the executable, compiled code, so that your users can easily jump to the definitions that they're calling into within their IDE.

task androidSourcesJar(type: Jar) {
    classifier = 'sources'
    from android.sourceSets.main.java.source
}

artifacts {
    archives androidSourcesJar
}

You'll be making use of two plugins for the publication, maven-publish and signing. Both of these are built-in, so they don't require any new dependencies.

apply plugin: 'maven-publish'
apply plugin: 'signing'

You'll set two properties on the Gradle project itself here, the group ID and the version of the artifact. You'll see where these values come from later on, when you apply this publication script in the module level build.gradle files.

group = PUBLISH_GROUP_ID
version = PUBLISH_VERSION

Next, let's grab a whole bunch of configuration parameters. In the script below, you'll first set all the variables to a dummy empty string. This will let the project sync and build without the publication set up, which would otherwise be an issue for your contributors.

Then, you'll try to fetch the values of the variables from a local.properties file in the root of the project if it exists, otherwise you'll look for them in the environment variables. The former lets you easily input these values locally on your machine, while the latter will help with setting up CI.

The first three variables will be used to sign the artifacts after they're built:

- signing.keyId: the ID of your GPG key (the last eight characters of its fingerprint)
- signing.password: the passphrase protecting your key
- signing.secretKeyRingFile: the path of the exported secret key file

The rest (ossrhUsername and ossrhPassword) will authenticate you to MavenCentral. These are the credentials that you've chosen for your Sonatype Jira registration.

ext["signing.keyId"] = ''
ext["signing.password"] = ''
ext["signing.secretKeyRingFile"] = ''
ext["ossrhUsername"] = ''
ext["ossrhPassword"] = ''

File secretPropsFile = project.rootProject.file('local.properties')
if (secretPropsFile.exists()) {
    println "Found secret props file, loading props"
    Properties p = new Properties()
    p.load(new FileInputStream(secretPropsFile))
    p.each { name, value ->
        ext[name] = value
    }
} else {
    println "No props file, loading env vars"
    ext["signing.keyId"] = System.getenv('SIGNING_KEY_ID')
    ext["signing.password"] = System.getenv('SIGNING_PASSWORD')
    ext["signing.secretKeyRingFile"] = System.getenv('SIGNING_SECRET_KEY_RING_FILE')
    ext["ossrhUsername"] = System.getenv('OSSRH_USERNAME')
    ext["ossrhPassword"] = System.getenv('OSSRH_PASSWORD')
}

Make sure that you've set these variables either in the aforementioned local.properties file or in your environment variables. If you want to use the property file, the syntax for it should look something like this (replace all the data here with your own):

signing.keyId=7ACB2D2A
signing.password=signingPass123
signing.secretKeyRingFile=C:/gpg-keys/7ACB2D2A.gpg
ossrhUsername=yourSonatypeUser
ossrhPassword=yourSonatypePassword

Here comes the complicated part, providing all the metadata for the library we're releasing, as well as the repository address that you'll upload it to. See the comments for the play-by-play explanation here.

publishing {
    publications {
        release(MavenPublication) {
            // The coordinates of the library, being set from variables that
            // we'll set up in a moment
            groupId PUBLISH_GROUP_ID
            artifactId PUBLISH_ARTIFACT_ID
            version PUBLISH_VERSION

            // Two artifacts, the `aar` and the sources
            artifact("$buildDir/outputs/aar/${project.getName()}-release.aar")
            artifact androidSourcesJar

            // Self-explanatory metadata for the most part
            pom {
                name = PUBLISH_ARTIFACT_ID
                description = 'A Kotlin SharedPreferences wrapper'
                // If your project has a dedicated site, use its URL here
                url = 'https://github.com/autsoft/krate'
                licenses {
                    license {
                        name = 'The Apache License, Version 2.0'
                        url = 'http://www.apache.org/licenses/LICENSE-2.0.txt'
                    }
                }
                developers {
                    developer {
                        id = 'zsmb13'
                        name = 'Márton Braun'
                        email = 'braun.marton@autsoft.hu'
                    }
                }
                // Version control info, if you're using GitHub, follow the format as seen here
                scm {
                    connection = 'scm:git:github.com/autsoft/krate.git'
                    developerConnection = 'scm:git:ssh://github.com/autsoft/krate.git'
                    url = 'https://github.com/autsoft/krate/tree/master'
                }
                // A slightly hacky fix so that your POM will include any transitive dependencies
                // that your library builds upon
                withXml {
                    def dependenciesNode = asNode().appendNode('dependencies')

                    project.configurations.implementation.allDependencies.each {
                        def dependencyNode = dependenciesNode.appendNode('dependency')
                        dependencyNode.appendNode('groupId', it.group)
                        dependencyNode.appendNode('artifactId', it.name)
                        dependencyNode.appendNode('version', it.version)
                    }
                }
            }
        }
    }
    repositories {
        // The repository to publish to, Sonatype/MavenCentral
        maven {
            // This is an arbitrary name, you may also use "mavencentral" or
            // any other name that's descriptive for you
            name = "sonatype"

            def releasesRepoUrl = "https://oss.sonatype.org/service/local/staging/deploy/maven2/"
            def snapshotsRepoUrl = "https://oss.sonatype.org/content/repositories/snapshots/"
            // You only need this if you want to publish snapshots, otherwise just set the URL
            // to the release repo directly
            url = version.endsWith('SNAPSHOT') ? snapshotsRepoUrl : releasesRepoUrl

            // The username and password we've fetched earlier
            credentials {
                username ossrhUsername
                password ossrhPassword
            }
        }
    }
}

Finally, this small piece of code tells the signing plugin to sign the artifacts we've defined above.

signing {
    sign publishing.publications
}

That's the publish-mavencentral.gradle script all built up, ready to use. Time to include it in a module! Head to the build.gradle file of your library module (in our case, this is the krate module), and add the following code:

ext {
    PUBLISH_GROUP_ID = 'hu.autsoft'
    PUBLISH_ARTIFACT_ID = 'krate'
    PUBLISH_VERSION = android.defaultConfig.versionName
}

apply from: "${rootProject.projectDir}/scripts/publish-mavencentral.gradle"

Here you finally see the group ID, artifact ID, and version being set, so that the publication script can make use of them. Then, the script itself is applied. This is all the code you need to add per-module if you are publishing your library in multiple artifacts, everything else is done by the common script.

Your first release, manually

With all of that set up, you're now ready to publish the first version of your library!

For each repository you have defined in the publishing script, a Gradle task will be created to publish to that repository. In our example, our first module to publish is krate, and we've named the repository sonatype. Therefore, we need to execute the following command to start publication (replace the module name with your own here):

gradlew krate:publishReleasePublicationToSonatypeRepository

This will create a so-called staging repository for your library, and upload your artifacts (aar and sources) to that repository. This is an intermediate step where you can check that all the artifacts you wanted to upload have made it, before hitting the release button.

To view the repository, go to https://oss.sonatype.org/ and log in. In the menu on the left, select Staging repositories.

The Sonatype menu

Scroll around the list until you find your own repository, which has your group ID in its name. If you select it and look at the Content tab, you'll see the files that have been uploaded.

List of staging repos

If you have multiple modules to publish, at this point you could keep invoking their Gradle upload tasks, and collect all the uploaded files in this staging repository. When you're done uploading files to the repository, you have to Close it. With the repository selected, hit the Close button in the toolbar. Confirm your action in the dialog (you don't need to provide a description here).

Closing your staging repository

This will take just a few moments, you can follow along with it happening in the Activity tab.

Observing the activity of the staging repository

With the repository closed, you now have two final options available to you. Drop will throw away the repository, and cancel the publication entirely. Use this if something went wrong during the upload or you've changed your mind.

Release, on the other hand, will publish the contents of your staging repository to MavenCentral. Again, you get a confirmation dialog, and you can choose Automatically Drop so that the staging repository is cleaned up after the release completes.

Releasing the staging repository

The time this process takes can vary a bit. If you get lucky, your artifact will show up on MavenCentral in 10-15 minutes, but it could also take an hour or more in other cases. Note that search indexing is a separate, even longer process, so it can take about two hours for your artifact to show up on https://search.maven.org/.

If this was your first release, you should at this point go back and comment on your original Jira issue, to let them know that your repository setup and publication is working.

Automating closing and releasing

That was quite the adventure, wasn't it? To make things smoother for subsequent releases, you can automate the entire release flow using an additional Gradle plugin.

You'll have to add a new plugin, which will perform the closing and releasing of your staging repository for you via Gradle tasks. This one does come from an external source, so add it to your dependencies in your project level build.gradle file:

buildscript {
    dependencies {
        classpath "io.codearte.gradle.nexus:gradle-nexus-staging-plugin:0.21.0"
    }
}

In this same build.gradle file, apply the plugin:

apply plugin: 'io.codearte.nexus-staging'

Next, add the following configuration to your publish-mavencentral.gradle script, anywhere after you've fetched the username and password variables. Don't forget to replace the stagingProfileId with your own:

nexusStaging {
    packageGroup = PUBLISH_GROUP_ID
    stagingProfileId = 'bcea62bcea28e7' // dummy example, replace with your own!
    username = ossrhUsername
    password = ossrhPassword
}

The packageGroup will just match your group ID again. The stagingProfileId is an ID that Sonatype assigns to you, which the plugin uses to make sure all the artifacts end up in the right place during the upload. You can find this by going to Staging profiles, selecting your profile, and looking at the ID in the URL.

Finding the staging profile ID in the URL

The plugin provides a new Gradle task that you can use to close and then release your staging repository with one simple call:

gradlew closeAndReleaseRepository

At this point, you can upload and publish your library by just invoking these two Gradle tasks in sequence - pretty convenient! As a final step, let's hook this into a CI pipeline.

Continuous integration (with GitLab)

In our case, the tool for this happens to be GitLab CI. Whatever you're using, setting up publication with it will consist of two main steps: getting your secret values (the credentials and the signing key) into the build environment, and adding a job that runs the publication Gradle tasks.

Most of your secret variables - for the list of these, look at the publishing script again - can simply go into protected variables, which you'll find under Settings > CI/CD > Variables within your project in the case of GitLab:

Setting secret GitLab variables

However, your private GPG key is harder to inject into the build. It needs to be present as a file, but you should never commit it into a public repository.

You could technically commit the private key into a public repository, since it is protected by its passphrase. At that point, your key is only as secure as the strength of your passphrase (see more discussion here and here). It's much more secure to keep the key entirely private.

The workaround for this is to add its contents as a secret variable, and then write those contents into a temporary file during your build. Since it's a binary file, you need to first convert its contents into text form - base 64 encoding comes to the rescue.

Convert your secret key file into base 64 with the following command (if you're on Windows, you can use a Git or Ubuntu bash for this):

base64 7ACB2D2A.gpg > 7ACB2D2A.txt

Place the contents of this file into yet another protected variable, and name it GPG_KEY_CONTENTS. You'll be writing these contents back into a file called /secret.gpg during the pipeline. Make sure that the value you're setting for SIGNING_SECRET_KEY_RING_FILE matches that path, as that's how the publication script will be able to find your private key.

We won't go into the CI job configuration in too much detail. You can always look at the full, up-to-date CI config file for Krate here.

Most importantly, this config needs to include the following steps (replace the module name with your own):

echo $GPG_KEY_CONTENTS | base64 -d > /secret.gpg
gradlew krate:publishReleasePublicationToSonatypeRepository
gradlew closeAndReleaseRepository

First, it creates the /secret.gpg file by taking the environment variable, and performing a base64 decode on it. Next, it uploads a module's artifacts to the staging repository. If you have multiple modules, invoke this task for each module. Finally, the script closes and releases the staging repository.

It's recommended that you perform these Gradle tasks in a single job, on a single machine, and it might even help if it happens in a single Gradle invocation. Otherwise you might see problems such as multiple staging repositories being created for you with your files scattered all over them. At this point, being able to look at the staging repository and manually close/drop/release repositories will come in handy to fix things up.
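For instance, combining the upload and the close-and-release steps into a single invocation (with our example module name) might look like this:

gradlew krate:publishReleasePublicationToSonatypeRepository closeAndReleaseRepository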

Conclusion

Well, that was quite a journey. We hope that this detailed guide helped you get up and running with MavenCentral publication. If you have questions, you can contact and follow us on Twitter @autsoftltd.

If you're interested in library development, we recommend that you check out this article showcasing how we've designed Krate, and this article about maintaining compatibility between library versions in Kotlin libraries. You can also read about some important security concerns we ran into when using Jcenter.

Better Custom Views with Delegates

Introduction

Reusable UI components are all the rage these days. Everyone and their dog has their own design system and their set of components that their apps are built up from - especially tech giants.

These give you several benefits:

In this article, we'll take a look at implementing custom components easily by using Kotlin's delegates. In addition to the points above, we'll also focus on one last crucial part of implementing custom components: providing your fellow developers an easy to use API.

Specification

Our example component will be a card that can display an icon, a title, and some content text. All of these three pieces of data will be customizable by clients using the component:

The component in action.

Additionally, all of these will be optional, and blank by default. The layout will adapt dynamically in case one of them is missing:

The component with some data omitted.

With that, let's jump into it!

Basic View implementation

We'll create a custom View called InfoCard. Our layout hierarchy will be the following:

The layout hierarchy of the custom View.

Our InfoCard itself will be a FrameLayout, which contains the MaterialCardView that gives us the card style that we're looking for. Inside that, a ConstraintLayout lays out our various content Views.

Why isn't InfoCard a MaterialCardView? By making it a FrameLayout that wraps the card, we can add margins to the card, which will be contained within our component. These margins are very important in this case, as without them, the edges of the card and the shadows generated by its elevation would be easily cut off. Here's a comparison of creating this component with (top) or without (bottom) these built-in margins:

A comparison of implementing the custom View with margins included or not.

So we'll need to subclass FrameLayout. Even though we need several constructors here, we'll implement them by hand, and not use @JvmOverloads.

class InfoCard : FrameLayout {

    constructor(context: Context) : super(context)
    constructor(context: Context, attrs: AttributeSet?) : super(context, attrs)
    constructor(context: Context, attrs: AttributeSet?, defStyleAttr: Int) : super(context, attrs, defStyleAttr)

    init {
        inflate(context, R.layout.view_info_card, this)
    }

}

The XML layout we're inflating into the FrameLayout is simple enough:

<?xml version="1.0" encoding="utf-8"?>
<com.google.android.material.card.MaterialCardView xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:app="http://schemas.android.com/apk/res-auto"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    android:layout_margin="4dp"
    android:orientation="vertical">

    <androidx.constraintlayout.widget.ConstraintLayout
        android:layout_width="match_parent"
        android:layout_height="wrap_content"
        android:padding="12dp">

        <ImageView
            android:id="@+id/infoCardImage"
            android:layout_width="48dp"
            android:layout_height="48dp"
            android:src="@mipmap/ic_launcher"
            app:layout_constraintBottom_toBottomOf="parent"
            app:layout_constraintStart_toStartOf="parent"
            app:layout_constraintTop_toTopOf="parent" />

        <TextView
            android:id="@+id/infoCardTitleText"
            android:layout_width="0dp"
            android:layout_height="wrap_content"
            android:layout_marginStart="8dp"
            android:text="Title"
            android:textSize="18sp"
            android:textStyle="bold"
            app:layout_constraintBottom_toTopOf="@+id/infoCardContentText"
            app:layout_constraintEnd_toEndOf="parent"
            app:layout_constraintStart_toEndOf="@+id/infoCardImage"
            app:layout_constraintTop_toTopOf="parent"
            app:layout_constraintVertical_chainStyle="packed" />

        <TextView
            android:id="@+id/infoCardContentText"
            android:layout_width="0dp"
            android:layout_height="wrap_content"
            android:layout_marginStart="8dp"
            android:text="Lorem ipsum dolor sit amet..."
            app:layout_constraintBottom_toBottomOf="parent"
            app:layout_constraintEnd_toEndOf="parent"
            app:layout_constraintStart_toEndOf="@+id/infoCardImage"
            app:layout_constraintTop_toBottomOf="@+id/infoCardTitleText" />

    </androidx.constraintlayout.widget.ConstraintLayout>

</com.google.android.material.card.MaterialCardView>

Let's review it quickly:

- The root is a MaterialCardView with a small built-in margin, giving us the card look and room for its shadow.
- Inside it, a ConstraintLayout with some padding lays out the content.
- An ImageView (infoCardImage) displays the icon on the start side.
- Two TextViews (infoCardTitleText and infoCardContentText) show the title and the content text, packed vertically next to the image.

Placing an instance of this in our MainActivity will do the trick:

<hu.autsoft.customviewsarticle.infocard.InfoCard
    android:id="@+id/infoCard"
    android:layout_width="300dp"
    android:layout_height="wrap_content" />

Yay! Let's move on to filling it with real data.

Configuration from code

First, we'll implement the customization of the card from Kotlin/Java code, at runtime. The interface for this on the component will be provided by three properties, representing the title, content, and icon, respectively. All of these properties will be implemented by using Delegates.observable from the Kotlin Standard Library:

var title: String? by Delegates.observable<String?>(null) { _, _, newTitle ->
    infoCardTitleText.text = newTitle
}

This delegate takes a lambda as a parameter, which will be invoked every time the value of the property changes. We can use this callback to set the same value on the corresponding View. This can be viewed as a basic, manual form of data binding.

Note that these aren't just setters that will change the state of the UI. These properties have backing fields, where they actually store the data you set them to. This means that you can easily read the current String value of the title of the card, for example:

Log.d("VALUE", "Card's title is: ${infoCard.title}")

To handle hiding the views when their content is empty, we can use isVisible from android-ktx, after setting the new values:

var title: String? by Delegates.observable<String?>(null) { _, _, newTitle ->
    infoCardTitleText.text = newTitle
    infoCardTitleText.isVisible = !newTitle.isNullOrEmpty()
}

A side quest for typing

You might notice that there's a lot of typing involved in this delegate's declaration. Not in terms of hitting keys, but in terms of specifying the type of the delegate - twice. If we just omitted both declarations of String?, we'd be in trouble. The compiler would have to infer the type of the property solely from the initial value being passed in (null), which has the type Nothing?. Without going into too much detail (you can learn more about Nothing here), this would mean that we could never set the property to any value other than null!

Something else has to be done. Omitting the first type and leaving the type parameter of the observable call would work, but it places the type of the property further down the line than where it usually is, making it harder to find at a glance:

var content by Delegates.observable<String?>(null) { ... }

The other way would be much neater, declaring the type at the start of the line. However, this won't compile, as type inference unfortunately fails to propagate that type into the observable call:

var content: String? by Delegates.observable(null) { ... }

There is a fix here, however. A new type inference algorithm for Kotlin has been in the works for some time now. You can learn more about it in this video from KotlinConf 2018 and you'll also find it mentioned in the recent(ish) release notes of Kotlin 1.3.40. This algorithm is able to resolve types in many complex scenarios that the old one couldn't deal with - and it happens to do the trick in our situation as well.

To enable it, the following compiler flag has to be set in build.gradle:

tasks.withType(org.jetbrains.kotlin.gradle.tasks.KotlinCompile).all {
    kotlinOptions {
        freeCompilerArgs += "-XXLanguage:+NewInference"
    }
}

With this, the final version of the three properties can look like this:

var title: String? by Delegates.observable(null) { _, _, newTitle ->
    infoCardTitleText.text = newTitle
    infoCardTitleText.isVisible = !newTitle.isNullOrEmpty()
}

var content: String? by Delegates.observable(null) { _, _, newContent ->
    infoCardContentText.text = newContent
    infoCardContentText.isVisible = !newContent.isNullOrEmpty()
}

var icon: Drawable? by Delegates.observable(null) { _, _, newIcon ->
    infoCardImage.setImageDrawable(newIcon)
    infoCardImage.isVisible = newIcon != null
}

Now, using Kotlin Android Extensions, we can set these values from code simply:

infoCard.icon = getDrawable(R.drawable.ic_leave)
infoCard.title = "Time to leave!"
infoCard.content = "If you leave now, you'll be right on time for you next appointment."

Configuration from XML

Custom views are often set up from XML, using custom attributes. Let's create attributes for each of our content views, by adding the following in attrs.xml:

<?xml version="1.0" encoding="utf-8"?>
<resources>
    <declare-styleable name="InfoCard">
        <attr name="ic_title" format="string" />
        <attr name="ic_content" format="string" />
        <attr name="ic_icon" format="reference" />
    </declare-styleable>
</resources>

Previously, we put hardcoded values for all of these in our layout XML. Now is a good time to remove them, and use the tools: prefix for them instead:

tools:src="@mipmap/ic_launcher"
...
tools:text="Title"
...
tools:text="Lorem ipsum dolor sit amet..."

To process the attributes we've added, we'll forward the AttributeSet received in the constructors to an initView method:

constructor(context: Context, attrs: AttributeSet?) : super(context, attrs) {
    initView(attrs)
}
constructor(context: Context, attrs: AttributeSet?, defStyleAttr: Int) : super(context, attrs, defStyleAttr) {
    initView(attrs)
}

This method will be very simple. After obtaining the attribute values for our custom View, all we need to do is set the values of our existing properties, which already know how to display this data on the UI:

private fun initView(attrs: AttributeSet?) {
    attrs ?: return

    val attributeValues = context.obtainStyledAttributes(attrs, R.styleable.InfoCard)
    with(attributeValues) {
        try {
            icon = getDrawable(R.styleable.InfoCard_ic_icon)
            title = getString(R.styleable.InfoCard_ic_title)
            content = getString(R.styleable.InfoCard_ic_content)
        } finally {
            recycle()
        }
    }
}

To use these attributes from XML, we can add the following to an InfoCard element:

app:ic_icon="@drawable/ic_ok"
app:ic_title="Success"
app:ic_content="Purchase completed. We'll prepare your package soon."

Custom delegates

If you create lots of components like this, you might want to consider extracting the logic contained in the observable delegates into your own, custom delegate implementation, so that you don't have to reimplement it every time.

There are many ways to design the API of such a delegate, especially regarding how you pass the TextView that it needs to manage to the delegate class. You can pass in the TextView itself if you're careful enough, or you can opt to pass in an ID, or a lambda that can produce the TextView...

If you don't get this right, you can face issues due to View lookups not being available at constructor time, if you haven't inflated your layout yet.
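For example, a sketch of an ID-based variant (hypothetical, not part of the article's code) could postpone the View lookup until the property is first set, sidestepping the constructor-time problem:

class TextViewIdDelegate(
    @IdRes private val textViewId: Int,
    private val hideWhenEmpty: Boolean = true
) : ReadWriteProperty<View, String?> {

    private var value: String? = null

    override fun getValue(thisRef: View, property: KProperty<*>): String? = value

    override fun setValue(thisRef: View, property: KProperty<*>, value: String?) {
        this.value = value
        // Look the TextView up lazily, so the delegate can be created
        // before the layout has been inflated
        val textView = thisRef.findViewById<TextView>(textViewId)
        textView.text = value
        textView.isGone = hideWhenEmpty && value.isNullOrEmpty()
    }
}

Such a delegate would then be used as var title by TextViewIdDelegate(R.id.infoCardTitleText).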

Here's one of the simpler implementations, which will ask for a TextView reference directly in its constructor. It also provides an optional Boolean parameter to control whether you want to hide the TextView when it's set to display nothing.

class TextViewDelegate(
    private val textView: TextView,
    private val hideWhenEmpty: Boolean = true
) : ReadWriteProperty<View, String?> {

    private var value: String? = null

    override fun getValue(thisRef: View, property: KProperty<*>): String? {
        return value
    }

    override fun setValue(thisRef: View, property: KProperty<*>, value: String?) {
        this.value = value
        textView.text = value
        textView.isGone = hideWhenEmpty && value.isNullOrEmpty()
    }
}

To use such a delegate, you'll need to make sure that you inflate your layout before these properties are initialized. This means placing the initializer block before the property declarations. (If you want to know more about how class initialization happens in Kotlin, read this article.) This isn't a perfect solution, but it's the simplest one, and any misuse of the delegates will show up immediately as a crash the first time you try to instantiate it.

init {
    inflate(context, R.layout.view_info_card, this)
}

var title by TextViewDelegate(infoCardTitleText)

With these changes, setting the values of properties will continue to modify the UI as before, but you don't need to specify how a String is to be bound to a TextView every time.

Conclusion

That's it! We hope you found this method of creating custom Views useful, and can adopt it in your own custom components. You can find all the code for this article on GitHub, which we encourage you to check out, and play around with. Take a look at the commit history for a step-by-step evolution of the project.

We are @autsoftltd on Twitter, where you can follow us for more content like this, or to ask questions.

To learn even more about how custom delegates work, check out this article where we discuss how we've designed our library Krate, a simple SharedPreferences wrapper.

Managing your local work in Android Studio

Modern software development relies on Version Control Systems, as many developers manage their codebase and track changes with them. Being the de facto standard of VCSs, Git is used everywhere across technologies. Not surprisingly, Android Studio (and IntelliJ IDEA under the hood) has excellent integration with Git, however this time we are going to focus on a different kind of version control, the version control of your local changes.

But wait a minute - the local repo is a full-fledged Git repository, so why do we need to talk about local changes?

Well, if you have ever ...

... then I think there is room for improvement with your local change management.

Don't get me wrong, I am also guilty of all the things above. This is why I search for better approaches, and I would like to share some with you!

From local changes to remote changes

Although Git is a distributed VCS, in most cases there is a sort of central remote Git repository, often referred to as origin (or the blessed repository). This is the single source of truth: in most cases this is what a CI tool uses, and all developers working on the codebase have a local copy of that repository (or a significant part of it).

When you make changes on your local code, these changes affect your working directory. This is not your local repository, just a working copy. Upon commit, these changes are saved into your local repository, only visible to you. When you push your local commits, these changes are uploaded to the remote Git repository. So technically your local repository is also a VCS on its own.

Constraints of a commit

Commits are changes in our codebase, and a commit is considered to be the smallest unit of work we produce, an atomic transformation of the source code. It transforms our code from one state to another, basically adding and removing lines. One release of the software is an ordered chain of commits that transforms the initial state into the end product step by step. In most workflows, code that is not committed or not pushed to the blessed repository is not part of the codebase, as it will never get released.

Commits also have constraints, as they must transform a correct state of your app into another correct state. Correct state here means that your code at least compiles and all the tests are passing - you may also have a CI to ensure this.

However, as I mentioned in the introduction, there are a couple of scenarios that may look like a commit but cannot satisfy the above constraints. These should be managed locally, using local change management solutions.

Local history

IntelliJ IDEA and Android Studio have a neat feature called Local history. It tracks changes in your working directory, so it's technically not part of Git. It also enables you to revert any changes in between commits. It works with lines, files, and folders, so it is even possible to revert the whole codebase to a specific point in time. As its name says, it is local, so this history is only visible to you. Therefore, you no longer need to commit just to save a safe state (one that compiles, for instance, but isn't a complete solution yet).
To use Local history on any folder or file, right click it, then choose Local History > Show History.

Here you see your local changes on the selected files or folders. On a previous entry, you can hit right click and Revert to revert to that state.

Git commit(s) affecting the selected file(s) are also displayed on that timeline, making it easier to navigate through it.

You can also label the current state to annotate specific versions in your local history, by right clicking on your source and choosing Local History > Put Label. These labels are then displayed on the timeline similarly to commits (without the Commit Changes: prefix), so they'll help you find your way back to the marked state later.

Changelists

One underutilized feature of Android Studio, and also one of my favorites, is Changelists. By default, all the changes in your local working copy are part of the Default changelist, which you can see in the Version Control panel (⌘+9, or Alt+9 on PC), under the Local changes tab. A changelist is a group of local changes, and it is up to you to split your changes into more changelists, the way you want to.

I recommend switching on Group by directory and Expand all on the toolbar on the left for easier navigation, but these are just my preferences - use the configuration that suits you the most.

You can create a changelist by right clicking inside Local Changes and selecting New Changelist.

Every changelist must have a name, and I suggest using something that describes it well, because if this technique clicks for you - and I hope it will - you may end up using many changelists in parallel, and inactive changelists are often in a collapsed state. Also, I suggest that you use a name that is easy to address. Later we will talk about moving changes between changelists, and then you may end up typing in the name of the changelist. If you include a bug tracker number or an issue ticket ID in this name, it will make your life easier.

Changes on your local working copy are relative to your current Git HEAD, and changes being made will always be part of the currently active changelist. By default, the Default changelist is active; however, when you create a new changelist, you can set it to active. Only one changelist can be active at a given time, which makes sense.

There is a very neat feature called Track context, which means that the changelist will be linked to the editors that are open while you're working on that changelist. It lets you continue your work where you left off, with the exact same open files and cursor position(s).

Although Track context tracks your open editors, it does not track recent files separately; that feature is global. As I use Android Studio without tabs, as Hadi suggests, I navigate with Recent files (⌘+E, or Ctrl+E on PC) all the time, so that would be useful, but others may use it in different ways...

You can set any changelist to be the active one by right clicking on it and choosing Set to active, or by pressing Ctrl+Space (both Mac and PC) when it's selected.

Moving changes

It's easy to move file changes between changelists: just right click on a file in the Local changes tab and select Move to Another Changelist, or use ⌘+⇧+M (CTRL+SHIFT+M) and select a changelist. You can also type in the changelist's name - that is why I suggest memorable names. You can also create a new changelist if one with the name you've entered doesn't exist yet. A shiny New! badge will indicate this, which also helps you make sure you didn't just make a typo :D.

That is very neat, but it gets more exciting when you start to move changes line by line, not file by file. To manage changes line by line, open the diff of your file from your local changes tab, by right clicking and choosing Show Diff or ⌘+D (CTRL+D). This will show you the changes in the file.

After right clicking on a changed line - on the content itself, and not the line numbers - you can select Move to another changelist or press ⌘+⇧+M (CTRL+SHIFT+M), and move these changes, just like you can with entire files.

One thing to keep in mind is that the IDE tracks contiguous blocks of changes (hunks) instead of individual lines, so technically, these are what you can move around. This can be confusing if two unrelated changes are next to each other, but it is more likely that changes affecting multiple lines next to each other are part of the same logical change (this is why versionCode and versionNumber changes are considered to be only one change on the screenshot).

Utilizing changelists

Committing changelists

When we have organized our changes into changelists, there are multiple things we can do with them. We can of course commit changelists one-by-one. To do that, select a changelist and press ⌘+K (CTRL+K). It will pop up the commit window with the changelist selected, and by default the commit message will be the name of the changelist. However, if there is a comment added to the changelist, that will be the commit message instead.

An inherent effect of thoughtful naming of changelists is that you will no longer write messages upon commit, as you have defined the scope of your commit when you named your changelist. For me, this led to much better commit messages overall.

Shelving and stashing changes

When you are in the middle of something and need to switch branches, you may want to just put away your current work for later, save that work-in-progress state somehow. As it does not feel like a real commit, and since it's inconvenient to revert back and forth repeatedly in local history, you must use something different. We have two options to handle that situation: shelving and stashing.

Shelving

Shelving lets you save changelists into a separate local storage. It is a feature of IDEA and independent of Git. You can select your changelist and hit Shelve changes, and then track shelved changes under the Shelf tab.

When a changelist is shelved, its contents are not just saved but also detached from your current work, so if you continue working on that changelist, the shelved state won't track your new changes anymore. However, you can shelve the same changelist twice, with a different name and state. When you want to continue your work, just right click a shelved changelist under the Shelf tab and select Unshelve…, or hit ⌘+⇧+U (CTRL+SHIFT+U) when it's selected.

This is the recommended approach for the branch-switching situation, when you need to save in-progress work.

Stashing

Stashing, on the other hand, is a feature of Git. Stashing is similar to shelving, however currently IDEA only supports stashing the whole working copy, so stashes cannot benefit from changelists.

Another major difference is that Git stash metadata is saved in the .git folder, which is not tracked by Git, while IDEA’s shelf is saved under .idea/shelf/ in a .patch format. This makes it much easier to distribute shelved changes. For switching machines or passing a work-in-progress solution to someone else, a distributed shelf could be a considerable option.

In general, I would recommend using shelf over stash! Unless you use another IDE in tandem with Android Studio, which does not support IDEA’s shelf, I cannot think of another use case where I would prefer the latter.

Pros and cons

Following the above-mentioned techniques will result in many benefits such as:

However, it will have some drawbacks to keep in mind:

Conclusions

After all, I believe these techniques will improve your code quality and teamwork. I have used them on many projects over the past couple of years, and they've proven to be very useful, so I hope you will also benefit from them!

Thanks to my coworkers for their review of this article, and your feedback is also welcome! You can reach me on Twitter at @itsbata.

Introduction to cross-platform development with Ionic

The following post is a 2-part article about cross-platform development. The first part is a brief introduction to the whole development format with its advantages and disadvantages, and a few popular alternatives. The second part is a tutorial for an expense tracker application written in Ionic (with Angular). The two parts are independently readable, but going through both gives a more exhaustive cross-platform experience. The tutorial requires basic Angular knowledge for easier understanding.

Part 1 - Introduction

Cross-platform development is the practice of developing software products or services for multiple platforms or software environments.

In recent years, this topic has been raised and considered by more and more developers. The promise of "build one codebase, run on any platform" is incredibly appealing, both in terms of time and expenses, so why would we code natively for every platform?

Advantages, disadvantages

As mentioned previously, the main benefits of cross-platform development are the reduced development time and the significant cost efficiency. Beyond these, there are a few other benefits of this form of development, including the ease of updating: since there is only one application, all updates can be synchronized across all the platforms, and thanks to this instant synchronization, deploying changes is much easier.

Like any other promising technology, this approach also has some drawbacks that cannot be ignored. Cross-platform applications cannot always integrate flawlessly with their target operating systems because of the inconsistent communication between the native and non-native components, whereas native code enjoys direct access to the host's operating system and its functionality. In addition to the potential performance issues, designing the perfect user experience can be challenging with a shared codebase, since it cannot always take advantage of native-only features and methods.

At the beginning of the development, it is important to consider which aspects are essential for the targeted application. If it is a thick client application, or flawless performance or an outstanding user interface is essential, it is better to go native. Otherwise, cross-platform development may be the perfect choice for the project.

Alternatives

Since cross-platform development became so popular in the last few years, it is understandable that there are multiple frameworks dedicated to the approach. With no claim of being exhaustive, I would like to introduce a couple of popular platforms related to this topic.

Developed by Facebook, React Native is without a doubt the most popular JavaScript-to-native platform. For developers that are familiar with JavaScript (or even React), getting started is extremely easy, and the platform is well-tested, since Facebook and Instagram rely on it.

NativeScript is React Native’s biggest competitor. Both offer a similar cross-platform development experience. NativeScript lets the developer access 100% of the native APIs via JavaScript and reuse packages from NPM, CocoaPods, and Gradle.

Beloved by .NET developers, Xamarin uses a shared C# codebase, and with Xamarin's tools developers are able to write native Android, iOS, and Windows apps with native user interfaces. It also provides access to the native APIs, in addition to faster development via Xamarin plugins and NuGet packages.

Ionic is arguably the most popular framework for hybrid application development. Hybrid apps run from within a native application, in its own embedded browser (WKWebView on iOS, WebView on Android). Ionic's popularity is mainly due to the fact that it allows developers to use the well-known Angular framework (with React and Vue integration also coming to the framework). It also comes with a powerful CLI which provides an amazingly simple way to create, code, test, and deploy Ionic apps to any platform. It is built on top of Apache Cordova, which provides access to native APIs like the camera, Bluetooth, fingerprint authentication, GPS, and so on (though Ionic is about to change its default container to a new native API container called Capacitor, which makes it easy to build web apps that run on iOS, Android, Electron, and on the web as PWAs, with full access to native functionality on each platform).

Part 2 – Getting started with Ionic

Getting started with Ionic is incredibly easy with the help of its impressive CLI.

First steps

First, if you haven't done so yet, install Node.js to get the node package manager (npm). Then install Cordova and Ionic with npm:

npm install -g cordova ionic

Then create an Ionic project with the following:

ionic start

After this command, enter the name chartapp for the project, and also choose a starter template. Since we will create a simple one-page application, hit Enter on blank.

After the project generation, the CLI will ask us if we want to install Ionic Appflow and connect our app. Choose no (n); the Appflow SDK offers a CI/CD platform and other amazing tools, but we do not need them this time.

When the generation process is done, we can change directory to our freshly created project and run the app with the following:

cd chartapp
ionic serve

At first, it does not look like a mobile application, but in modern browsers we can easily switch to a mobile device view (e.g. in Chrome DevTools, with Ctrl+Shift+M on Windows or Command+Shift+M on Mac).

After these few easy steps, we can open our project in our favorite code editor, and everything is ready to create an awesome cross-platform application.

Expense tracker application

The main purpose of this tutorial is to get an insight into cross-platform development with Ionic while creating a simple mobile application that allows us to track our everyday expenses.

In this application the user will be able to enter the amount and choose the type (Taxes, Food, Transportation, Entertainment, Clothing or Other) of the spending. After tapping on the + button, the given input will appear on a spectacular doughnut chart. Tapping on the bin button deletes the content of the chart.

chartapp

Development

The generated project structure looks very similar to a typical Angular project's. The src/index.html file is the main entry point for the app; its purpose is to set up the scripts and CSS includes, and to bootstrap (start running) our app. As usual in Angular projects, most of our code goes into the src folder. For a more in-depth view, see the official project structure description.

Add the following attributes to the HomePage class in the src/app/home/home.page.ts file. These are variables for the expense chart, the amount of money spent, and the ID of the selected expense from the expenses array. The expenses array also contains the name, the amount already spent, and the displayed color of every expense. We will use the inputFocused variable to see whether the expense amount input field is in focus or not.

doughnutChart: any;
moneySpent: number;
selectedExpenseId: number;
inputFocused: boolean = false;

expenses = [
  { id: 1, name: 'Taxes', amount: 0, color: '#FFEB3B' },
  { id: 2, name: 'Food', amount: 0, color: '#E91E63' },
  { id: 3, name: 'Transportation', amount: 0, color: '#2196F3' },
  { id: 4, name: 'Entertainment', amount: 0, color: '#4CAF50' },
  { id: 5, name: 'Clothing', amount: 0, color: '#F57C00' },
  { id: 6, name: 'Other', amount: 0, color: '#BDBDBD' }
];

Chart (a small non-Ionic part)

Let's start the development process with the creation of the doughnut chart. This part is not about Ionic components, but we will be working with the chart's dataset in the following segments, and it is also interesting to see how seamlessly Angular libraries and directives work in Ionic.

In the project folder, install and save Chart.js (the charting library) and ng2-charts (for easy Chart.js integration in Angular) with the following command:

npm install ng2-charts chart.js --save

In the home.page.ts file import Chart from Chart.js:

import { Chart } from 'chart.js';

After the import, we can create and configure a new Chart and display it on the related view in the HomePage class:

  createChart() {
    this.doughnutChart = new Chart('doughnutChart', {
      type: 'doughnut',
      data: {
        labels: this.expenses.map(e => e.name), // mapping the names from the expenses array
        datasets: [{
          data: this.expenses.map(e => e.amount),
          backgroundColor: this.expenses.map(e => e.color),
          borderWidth: 2
        }]
      },
      options: {
        legend: {
          labels: {
            usePointStyle: true // only for prettier labels
          }
        }
      }
    });
  }

Make your HomePage class implement OnInit, and call the createChart method in the ngOnInit function.

import { Component, OnInit } from '@angular/core';

// ...
export class HomePage implements OnInit {

// ...

  ngOnInit() {
    this.createChart();
  }

In the src/app/home/home.page.html file, change the content of the ion-content element to the canvas:

<ion-content>
  <canvas id="doughnutChart" height="300" width="300"></canvas>
</ion-content>

(You cannot see the chart while all the spent amounts are zero, but temporarily changing the amounts in the expenses array can do the trick.) Also, you can change the content of the ion-title element to any title, for example “Expenses”.
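
For example, assuming the default header markup generated by the blank starter template, the top of home.page.html could look like this:

<ion-header>
  <ion-toolbar>
    <ion-title>Expenses</ion-title>
  </ion-toolbar>
</ion-header>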

Adding data

Add the following methods to the HomePage class in the home.page.ts file. The first one should be called every time the amounts in the expenses array change:

  refreshChartData(): void {
    this.doughnutChart.data.datasets[0].data = this.expenses.map(e => e.amount);
    this.doughnutChart.update();
  }

This method finds the chosen expense in the expenses array by id and adds the entered amount to it:

  addDataToChart(expenseid: number, spent: number) {
    let index = this.expenses.findIndex(item => item.id === expenseid);
    this.expenses[index].amount += spent;
    this.refreshChartData();
  }

This method validates the money spent and the selected expense; if no expense is selected, or the entered amount is invalid or missing, it presents a toast. A toast provides simple feedback about an operation in a small popup: it shows the given message and disappears automatically after the given duration (in ms):

  async addData() {
    if (this.moneySpent > 0 && this.selectedExpenseId) {
      this.addDataToChart(this.selectedExpenseId, this.moneySpent);

      this.moneySpent = null;
      this.selectedExpenseId = null;
    } else {
      const toast = await this.toastController.create({
        message: 'Wrong or missing data!',
        duration: 1500
      });
      toast.present();
    }
  }

To make the toast work, we need to import ToastController from ‘@ionic/angular’. ToastController is a controller used to create and present Toast components:

import { ToastController } from '@ionic/angular';

And then add it to the constructor:

constructor(public toastController: ToastController) { }

Add the following elements to the ion-content under the chart in the home.page.html file. The Ionic input component is a wrapper around the HTML input element with some custom styling and additional functionality. This input is bound to the moneySpent attribute. The events given in ionBlur/ionFocus are emitted when the input loses/acquires focus; for aesthetic reasons, the floating action buttons will not appear while the input field is focused:

  <ion-item>
    <ion-label position="stacked">Amount of money spent:</ion-label>
    <ion-input type="number" [(ngModel)]="moneySpent" placeholder="Enter amount" 
    (ionFocus)="inputFocused = true" (ionBlur)="inputFocused = false"></ion-input>
  </ion-item>

Selects are form controls for choosing one option (or several) from a set of options, similar to the native select element. When a user taps on the select, a dialog appears with all of the options in a list. In this select, the user can choose an expense from the expenses array:

  <ion-item>
    <ion-label>Type of expense</ion-label>
    <ion-select [(ngModel)]="selectedExpenseId" placeholder="Select type">
      <ion-select-option *ngFor="let e of expenses" [value]="e.id">{{e.name}}</ion-select-option>
    </ion-select>
  </ion-item>

A floating action button (FAB) is a circular button that triggers an action in the app’s UI. FABs should be placed in a fixed position that does not scroll with the content. FABs usually contain an icon, and Ionic provides a huge set of Ionicons to use in FABs (and anywhere else).

  <ion-fab vertical="bottom" horizontal="end" slot="fixed" *ngIf="!inputFocused">
    <ion-fab-button (click)="addData()">
      <ion-icon name="add"></ion-icon>
    </ion-fab-button>
  </ion-fab>

After creating the previous methods and adding the elements to the ion-content, the home page should look and work like this:

Deleting data

In this section, the data deleting functionality is implemented. Pressing the delete button will reset the displayed expenses to zero. For example, this could be used monthly to restart the expense tracking.

Add the following two methods to the HomePage class in home.page.ts. The first method iterates through the expenses array and sets the amounts to zero.

  deleteData(): void {
    for (let index = 0; index < this.expenses.length; index++) {
      this.expenses[index].amount = 0;
    }
    this.refreshChartData();
  }

This method presents an alert asking whether the user wants to delete the data. An alert is a dialog that presents information to users or collects information from them using inputs. As can be seen, there are two buttons on the alert: a No button for canceling the deletion, and a Yes button that calls the deleteData() method.

  async presentDeleteAlert() {
    const alert = await this.alertController.create({
      header: 'Delete',
      message: 'Are you sure you want to delete all your expenses?',
      buttons: [
        {
          text: 'No',
          role: 'cancel',
          cssClass: 'secondary'
        }, {
          text: 'Yes',
          handler: () => {
            this.deleteData();
          }
        }
      ]
    });
    await alert.present();
  }

To make alerts work, similarly to the toast, we need to add AlertController to the imported elements from ‘@ionic/angular’:

import { ToastController, AlertController } from '@ionic/angular';

Also add it to the constructor:

constructor(public toastController: ToastController, public alertController: AlertController) { }

Add the floating action button for deleting data to the ion-content in home.page.html:

  <ion-fab vertical="bottom" horizontal="start" slot="fixed" *ngIf="!inputFocused">
    <ion-fab-button color="danger" (click)="presentDeleteAlert()">
      <ion-icon name="trash"></ion-icon>
    </ion-fab-button>
  </ion-fab>

Saving the data

At this stage of the tutorial we have a completely working application, but every time we close the application, we lose the data since our app does not store the expenses. For this purpose, Ionic provides an easy way to store key/value pairs and JSON objects. Ionic Storage uses a variety of storage engines underneath, picking the best one available depending on the platform.

In the terminal install the cordova-sqlite-storage plugin:

ionic cordova plugin add cordova-sqlite-storage

Then install the package:

npm install --save @ionic/storage

Next, add it to the imports list in your NgModule declaration in src/app/app.module.ts:

import { IonicStorageModule } from '@ionic/storage';
  // ...
  imports: [
    BrowserModule, 
    IonicModule.forRoot(), 
    AppRoutingModule, 
    IonicStorageModule.forRoot()
  ],
  // ...

After the previous steps, you can inject Storage into the HomePage:

import { Storage } from '@ionic/storage';

In the HomePage class add Storage to the constructor, and create a storageKey attribute that will serve as a key, while the expenses array will be the value:

  storageKey: string = 'expenses';

  constructor(public toastController: ToastController, public alertController: AlertController, public storage: Storage) { }

Finally, create the methods for saving and loading the expenses array in the HomePage class:

  saveData() {
    this.storage.set(this.storageKey, JSON.stringify(this.expenses));
  }

  loadData() {
    this.storage.get(this.storageKey).then((val) => {
      if (val) {
        this.expenses = JSON.parse(val);
      }
      this.createChart();
    });
  }

Change the method called in ngOnInit() from createChart() to loadData(), and the expenses array will be loaded from Storage after initialization (loadData() calls createChart() once the stored value has been read):

  ngOnInit() {
    this.loadData();
  }

Also call the saveData() method from the addData() and deleteData() methods to always save changes:

  async addData() {
    if (this.moneySpent > 0 && this.selectedExpenseId) {
      this.addDataToChart(this.selectedExpenseId, this.moneySpent);

      this.moneySpent = null;
      this.selectedExpenseId = null;
      this.saveData(); // saving added
    } else {
      const toast = await this.toastController.create({
        message: 'Wrong or missing data!',
        duration: 1500
      });
      toast.present();
    }
  }

  deleteData(): void {
    for (let index = 0; index < this.expenses.length; index++) {
      this.expenses[index].amount = 0;
    }
    this.refreshChartData();
    this.saveData(); // saving added
  }

Now you have a fully functioning expense tracker application with a fancy doughnut chart that displays all the entered spendings compared to each other.

Deploying the app

At this point, we have a fully functioning application, and it would be cool to generate a release build and maybe even publish the application to the App Store or Play Store. I suggest using these official guides to walk through the whole process:

To install and set up the required development kits, the official guide provides instructions (pay attention to the notes for your operating system):
https://ionicframework.com/docs/v1/guide/installation.html

For the publishing process, this guide explains it end to end (key generation, APK signing, etc.):
https://ionicframework.com/docs/v1/guide/publishing.html
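
For reference, a minimal sketch of the release-build commands with the Cordova integration used in this tutorial (signing and store uploads are covered in the publishing guide above) could look like this:

# Android: production release build (the generated APK still has to be signed, see the guide)
ionic cordova build android --prod --release

# iOS: production build, then archive and sign it from Xcode
ionic cordova build ios --prod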

Summary

Conclusion

Choosing between native and cross-platform development can be tricky. For applications that require high performance, native applications usually win, since they can be designed to take full advantage of the device’s resources, while hybrid apps are not optimized for a single platform.

At the start of the project, the developers need to consider priorities and the chances of future features and changes, and make the big decision based on them.

If you go with cross-platform development, I can wholeheartedly recommend Ionic for its beautiful and smooth UI components, and the fact that it is based on Angular is a big plus, since most front-end developers have some experience with it.

Other useful references

A more detailed comparison between cross-platform development tools: https://www.outsystems.com/blog/free-cross-platform-mobile-app-development-tools-compared.html

About the Ionic and Angular lifecycle: https://ionicframework.com/docs/lifecycle/angular

Chart.js samples for different chart types: https://www.chartjs.org/samples/latest/

Using the keyboard plugin from Cordova instead of detecting when the input field is in focus is also a good and maybe more sophisticated solution (a short sketch of this idea follows after this list): https://github.com/ionic-team/cordova-plugin-ionic-keyboard

Ionic Studio for faster and smoother development: https://ionicframework.com/studio
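
As a rough, hypothetical sketch of that keyboard-plugin alternative (assuming cordova-plugin-ionic-keyboard is installed via ionic cordova plugin add cordova-plugin-ionic-keyboard; the event names come from the plugin’s documentation), the FAB visibility could be driven by the keyboard events instead of the ionFocus/ionBlur handlers used earlier:

  // Hypothetical alternative to the ionFocus/ionBlur handlers:
  // drive the inputFocused flag from the keyboard plugin's window events.
  ngOnInit() {
    this.loadData();

    // Fired by cordova-plugin-ionic-keyboard when the on-screen keyboard appears/disappears
    window.addEventListener('keyboardWillShow', () => { this.inputFocused = true; });
    window.addEventListener('keyboardWillHide', () => { this.inputFocused = false; });
  }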