After failing to run my mobile AI agent on a Pinephone, I decided to switch devices to continue that experiment. Since I am not getting an Android device, I shifted my focus to my iPhone. Unfortunately, this requires a Mac, and I have been a hardcore Windows user in my private life pretty much since I started coding.
Still, it was with a heavy heart that, after years of flirting with the dark side, I finally embraced it. While I have a Mac mini that I haven’t touched in years, I needed a robust daily driver that would be more lightweight than my massive gaming PC.
As a result, on Tuesday night, I ordered a MacBook from my phone and collected it during my lunch break the next day.
I carried on with work before celebrating my wife’s birthday with some delicious beef wellington in town.
Now, I need to rewrite that C#/Avalonia app in Swift, which has likely changed since the last time I wrote an iOS app back in 2018.
Building the UI
The first thing I did was vibe code the SwiftUI interface. As you may have noticed from my last entry, I am open to vibe coding when it comes to implementation details, at least the presentation-related ones like view rendering.
I simply asked Grok to convert the Avalonia XAML file I wrote in the previous entry into SwiftUI, and the resulting code read so simply that it felt like overkill to use an AI agent here.
Still, within seconds I had my UI running as expected in the simulator.
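For context, the converted screen boils down to something along these lines. This is only a rough sketch; the view name, bindings and layout are illustrative rather than the exact output Grok produced:
import SwiftUI

// Rough sketch of the converted screen: a text field, a button and a result label.
// Names and layout are illustrative, not the exact conversion output.
struct ContentView: View {
    @State private var input = ""
    @State private var result = ""

    var body: some View {
        VStack(spacing: 16) {
            TextField("Type something to analyse", text: $input)
                .textFieldStyle(.roundedBorder)
            Button("Detect sentiment") {
                // The sentiment logic gets wired in below.
            }
            Text(result)
        }
        .padding()
    }
}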
Now onto the business logic.
Local model integration
I spent quite some time trying to reuse my ONNX model before taking a step back. The ONNX solution came from the need to run inference on a Pinephone, but maybe iOS already has built-in support for on-device inference.
After some googling I stumbled onto CoreML and tried to convert the ONNX model to it. This looked too hard to make sense. Why would the conversion be that hard when using the Swift Package Manager? Why do the CocoaPods alternatives point at libraries that have all been archived?
The answer was in front of me the whole time. Apple already sorted it out. No need for an ONNX runtime library, no need to manually quantise my embedded language model. All I had to do was skim through this tutorial and notice that I could just use the NaturalLanguage library.
From there I could grab the sentiment with a few lines of code. Here is my whole sentiment detector in Swift, which is basically a copy/paste of the sentiment detection from that tutorial:
//
// Scorer.swift
// MobileSentimentDetector
//
// Created by Jean-Dominique Nguele on 21/05/2025.
//
import Foundation
import NaturalLanguage
class SentimentDetector {
    let tagger = NLTagger(tagSchemes: [.sentimentScore])

    func score(_ text: String) -> Double {
        var sentimentScore = 0.0
        tagger.string = text
        // Enumerate paragraph-level sentiment tags and keep the last parsed score.
        tagger.enumerateTags(
            in: text.startIndex..<text.endIndex,
            unit: .paragraph,
            scheme: .sentimentScore,
            options: []) { sentimentTag, _ in
                if let sentimentString = sentimentTag?.rawValue,
                   let score = Double(sentimentString) {
                    sentimentScore = score
                    return true
                }
                return false
            }
        return sentimentScore
    }
}
If that sentimentScore result is positive, then so is the sentiment.
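As a quick sanity check, calling it looks something like this. The example string and the threshold at zero are mine; the score itself ranges from -1.0 to 1.0:
let detector = SentimentDetector()
let score = detector.score("The beef wellington was delicious")
// Positive score means positive sentiment; scores range from -1.0 to 1.0.
print("Score: \(score), sentiment: \(score > 0 ? "positive" : "negative")")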
Time for the demo
To make sure that no external API is called under the hood, I decided to switch my device to airplane mode after installing this demo app on it.
The one oddity I found is that "This is good" is seen as negative, but every other test input yielded the expected result.
Enjoy the video and see you next time!