Friday, January 30, 2026

Easy AI with MicroPython

For an index to all my stories click this text

This story tells how to use a free (no sign-in) API to connect to an AI system with MicroPython. The program was tested on both a Raspberry Pi Pico W and a Pico 2W, but it should run equally well on an ESP8266 or ESP32 with MicroPython.

Like I said in previous stories on this weblog: I love playing with AI.
In one of those stories I described how to install a Llama (a complete AI system) on a Raspberry Pi. You can read that story here:
https://lucstechblog.blogspot.com/2026/01/run-ai-models-local.html

That works great, but the Llamas (language models) you can run are limited because the Raspberry Pi's memory is limited. The AI systems we access through our browser, like ChatGPT, Copilot, Gemini and Grok, are multi-million dollar systems with hundreds of gigabytes (GB) of memory.

And then I found a very simple API that allows connecting to one of the cloud-based large models.
The API is so simple that I could use it with MicroPython and with Javascript.

This story tells how to use it with MicroPython. The next story shows how to use it with Javascript.

The API

The API call looks like this:

http://nimaartman.ir/science/L1.php?text=:HERECOMESTHEQUESTION

If you want to try it (and fill in a real question) you can just put it in your browser's URL bar and press Enter.

As you can see, the actual question comes after "text=:". But no spaces are allowed in the question, so our program needs to filter them out.


This works in your browser; however, the answer looks like this:
data    "🤔💭 What’s your question? 🚀✨"

It is JSON-coded and contains emoticons and other Unicode characters.
I want to filter these out, so our program needs to take care of that too.

So here is the complete program in MicroPython.

'''
program to get data from an ai system
'''

import network
import urequests
import ujson

# Router credentials
ssid = "YOUR-ROUTERS-NAME"
pw = "Routers-PAssword"
print("Connecting to wifi...")

# wifi connection
wifi = network.WLAN(network.STA_IF) # station mode
wifi.active(True)
wifi.connect(ssid, pw)

# wait for connection
while not wifi.isconnected():
    pass

# wifi connected
print("Connected. IP:", wifi.ifconfig()[0])

question = "Translate 'twee' from dutch to german"
question = question.replace(' ','')

url = "http://nimaartman.ir/science/L1.php?text="
url = url + question

# Send the GET request
response = urequests.get(url)

try:
    # Attempt to parse the JSON response
    response_str = response.text
    data = ujson.loads(response_str)

    # Extract the text field from the JSON data
    text_with_unicode = data.get("data")

    ## Clean the unicode text
    cleaned = ''.join(ch for ch in text_with_unicode if ord(ch) < 128)
    print(cleaned)

    response.close()

except ValueError:
    # Handle the case where the response is not valid JSON
    print("Error: The response is not valid JSON or the server returned an error message.")
    print("Raw response2:", response.text)
    response.close()


Let us have a look at some parts of the code.

The program starts with importing the necessary libraries (which are included in the MicroPython installation).

ssid = "YOUR-ROUTERS-NAME"
pw = "Routers-PAssword"

Don't forget to replace YOUR-ROUTERS-NAME and Routers-PAssword with your own router's credentials.

The program then connects to the internet and shows the IP address the microcontroller got from the router.

question = "Translate 'twee' from dutch to german"
question = question.replace(' ','')

The question I used as a first test is to translate the Dutch word for 2 (twee) into German.
The next line replaces all spaces with an empty string, which effectively removes them. This is needed for the API to work, as discussed earlier.
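Removing the spaces works, but it also glues the words of the question together. An alternative sketch (an untested assumption on my part: whether this particular server decodes percent-encoded text) is to replace every space with %20, the standard URL escape, so the words stay separated:

```python
# Assumption: the API may accept percent-encoded spaces (%20),
# which is the standard way to put spaces in a URL. Test it first!
question = "Translate 'twee' from dutch to german"
encoded = question.replace(' ', '%20')
print(encoded)  # Translate%20'twee'%20from%20dutch%20to%20german
```

If the server decodes %20 the AI receives the question with the words intact, which may give better answers.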

url = "http://nimaartman.ir/science/L1.php?text="
url = url + question

This is where the API call is constructed.

# Send the GET request
response = urequests.get(url)

The urequests library sends the API call and the answer is stored in the variable response.

    # Attempt to parse the JSON response
    response_str = response.text
    data = ujson.loads(response_str)

    # Extract the text field from the JSON data
    text_with_unicode = data.get("data")

The first part decodes the JSON code in the response variable and puts that in the data variable.

The second part gets the answer's actual text. It is put in the text_with_unicode variable, as it still contains all the Unicode characters like emoticons.

    cleaned = ''.join(ch for ch in text_with_unicode if ord(ch) < 128)
    print(cleaned)

Now this looks complicated but really isn't.
We start with an empty string and join into it every character (ch) whose ASCII value is less than 128 (ord(ch) < 128).

And then we print the cleaned text.
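You can see what this filter does with a small test, which also runs in regular desktop Python (the sample string here is just an invented example):

```python
# Keep only plain ASCII characters (code point below 128);
# emoji and other Unicode symbols are dropped.
def ascii_only(text):
    return ''.join(ch for ch in text if ord(ch) < 128)

print(ascii_only("Zwei 🤔✨"))  # prints "Zwei " - the emoji are gone
```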

Copy the code from this page and paste it in Thonny. Save it with an appropriate name and run the code.

Some examples.


Here is what I received when I asked to translate the Dutch word "twee" into German.


And here I asked to translate "hello" into Japanese.


And here I asked what the book with the title "This perfect day" is about.
The description is brief but correct while not giving away the plot !!!

Small side-note from me: it is about a society that is run by a supercomputer that makes all decisions for humans.
Written by Ira Levin and highly recommended !!


I guess these are enough examples.

If you have developed a nice project with this: let me know.

Till next time
Have fun

Luc Volders

Friday, January 23, 2026

Solving the CORS error with Javascript

For an index to all my stories click this text.

This story shows how to beat the dreaded CORS error in Javascript.

What is a CORS error?

When you open a webpage or try to fetch information from a web service that page or service comes from a specific origin.

For example.
- https://lucstechblog.blogspot.com/  is a specific origin
- https://www.wikipedia.org/ is another origin
- https://hackaday.com/blog/ is yet another origin

Web browsers protect their users by enforcing something called the Same-Origin Policy. The definition of this policy is:

“A web page can only make requests to the same origin it came from —
unless the other server explicitly allows it.”


So if you are running a webpage on your computer and it tries to fetch some information from a page elsewhere on the web, you can get an error like this:

Cross-Origin Request Blocked: The Same Origin Policy disallows reading the remote resource at https://example.com/. (Reason: CORS header ‘Access-Control-Allow-Origin’ missing). Status code: 200.

That is the CORS error (CORS = Cross-Origin Resource Sharing)

And your webpage might just do nothing. To see this error you will have to open the developer console in your browser.

To avoid this happening, the server that sends the information to you needs to include a special header in its response:

Access-Control-Allow-Origin: *

Without this header the browser blocks the response for safety.
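If you control the server yourself, sending that header is all it takes. Here is a minimal sketch in Python (the handler name, port and JSON payload are my own example choices):

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

def cors_headers():
    # The header that tells browsers any origin may read the response
    return {"Access-Control-Allow-Origin": "*",
            "Content-Type": "application/json"}

class CORSHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        for name, value in cors_headers().items():
            self.send_header(name, value)
        self.end_headers()
        self.wfile.write(b'{"data": "hello"}')

# To actually serve it on port 8080, uncomment this line:
# HTTPServer(("", 8080), CORSHandler).serve_forever()
```

With this header in place, a fetch() from any webpage can read the response.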

What is the purpose?

There are several reasons to block a cross-origin request. One of the obvious ones is to prevent web scrapers from getting information from a webpage. A user can open the webpage in a browser, but any fetch commands that want to get information will not work.

Simply said: humans can visit the pages, but other sites can’t programmatically fetch and read them.

CORS error in real life

Let's have a look at a simple webpage on the internet that is made for testing purposes: example.com


As you can see this page opens just fine in your web browser.

And here is a demo webpage that tries to fetch information from the website example.com

<!doctype html>
<html>
<body>
  <h2>CORS Demo  Destined to fail</h2>

    Target URL:<br>
    <input id="target" size="30" value="https://example.com/">

    <br><br>

    <button id="fetchBtn">Try to fetch data</button>

  <h3>Output:</h3>
  <pre id="output">Click the button...</pre>

<script>
document.getElementById("fetchBtn").onclick = function() {
  const target = document.getElementById("target").value;
  const output = document.getElementById("output");
  output.textContent = "Fetching directly from: " + target + "\n\n";

  fetch(target)
    .then(r => r.text())
    .then(t => output.textContent += "✅ Unexpected success:\n\n" + t)
    .catch(e => output.textContent += "❌ Expected CORS error:\n" + e);
};
</script>
</body>
</html>

Nothing too complicated here.
The fetch command tries to access the https://example.com/ webpage and puts the response in the output field.


This is how the webpage looks.


And this is what is displayed on the page when the button is clicked.


I opened the browser's web developer tools, and in the console you can clearly see the CORS error.

Just like said before: humans can visit the pages, but other sites can’t programmatically fetch and read them.

But how do we get around this?

There is a website with the name https://allorigins.win/

Simply said: you send your fetch command to that site. They wrap up the request in such a way that the CORS error is avoided and send the answer back to you.

Here is a simple webpage that demonstrates this:

<!doctype html>
<html>

<body>
  <h2>CORS Demo  With AllOrigins<br>that works</h2>


    Target URL:
    <br><br>
    <input id="target" size="30" value="https://example.com/">

    <br><br>
    <button id="fetchBtn">Fetch the data</button>


  <h3>Output:</h3>
  <pre id="output">Click the button...</pre>

<script>
document.getElementById("fetchBtn").onclick = function() {
  const target = document.getElementById("target").value;
  const encoded = encodeURIComponent(target);
  const proxy = "https://api.allorigins.win/get?url=" + encoded;
  const output = document.getElementById("output");

  output.textContent = "Fetching via AllOrigins...\n\n";

  fetch(proxy)
    .then(r => r.json())
    .then(data => {
      const snippet = data.contents.substring(0, 500);
      output.textContent += "✅ Success via AllOrigins!\n\n" + snippet + "\n\n... (truncated)";
    })
    .catch(e => output.textContent += "❌ Proxy fetch failed:\n" + e);
};
</script>
</body>
</html>

The page looks and acts the same as the previous one. So let us have a quick look at the changes that make the magic happen.

  const target = document.getElementById("target").value;
  const encoded = encodeURIComponent(target);
  const proxy = "https://api.allorigins.win/get?url=" + encoded;

When the button is clicked the target website (https://example.com/) is put in the target variable and URI-encoded.
Next this encoded value is appended to the AllOrigins API URL and stored in the variable proxy.
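For comparison, the same two steps look like this in Python, where urllib.parse.quote plays the role of Javascript's encodeURIComponent:

```python
from urllib.parse import quote

target = "https://example.com/"
encoded = quote(target, safe="")  # also escape ':' and '/'
proxy = "https://api.allorigins.win/get?url=" + encoded
print(proxy)
# https://api.allorigins.win/get?url=https%3A%2F%2Fexample.com%2F
```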

  fetch(proxy)
    .then(r => r.json())
    .then(data => {
      const snippet = data.contents.substring(0, 500);
      output.textContent += snippet + "\n\n... (truncated)";
    })

The constructed URL (the variable proxy) is fetched, the first 500 characters of the result are put in the variable snippet, and the snippet is then shown in the output field.


Here is the webpage that achieves this again.



And this is what we get when we press the button. The required information is indeed obtained. Just what we needed.

Of course this is just a trivial example. But there are real-world situations where this can be invaluable. I have an example coming up in a future story.

Sidenote

The discussed HTML pages contain Javascript code. Javascript is easy and you get immediate results in your browser, so on your computer's screen. Javascript code will work in Firefox as well as in Chrome, on PCs and Raspberry Pis, equally well. It is cross-compatible with most systems.
But Javascript is also very extensive and there are a lot of nice tricks to accomplish things. I bundled more than 500 tips and tricks to address programming problems in a neat book. Available from Amazon worldwide:



Click here to get more info or buy the book.

Caveats ??

Well, there are some things to consider when using the AllOrigins API.
First, it is an external service you are using. And as we have seen in the past, such services might just pull the plug or start charging money.

But it is a FOSS (free and open source) service. You can get the code at:
https://github.com/gnuns/allorigins
and there you can even find how to install AllOrigins on your own server.
If you are going to use this frequently I urge you to do so. A small Raspberry Pi might do the trick, although I have not tested that yet.

Another thing of concern is that you are sending data to a webservice. So never send fetch commands that contain sensitive information like passwords or bank details etc. These might get compromised.

There are a kazillion websites and services out there, so I can not say whether this works with every service on the internet. Test before you build an actual project with this.

But for now I have solved one of my problems with this method.

Till next time
have fun

Luc Volders 



Friday, January 16, 2026

Run AI models local

For an index to all my stories click this text.

It is almost unbelievable that ChatGPT, the first mainstream AI chatbot, was released to the public on 30 November 2022. That is (at the moment of this writing) a bit over 3 years ago !!
According to Wikipedia, in January 2023 ChatGPT had become the fastest-growing consumer software application in history, gaining over 100 million users in two months. That's just 2 months after the release.

Soon several other AIs emerged, like: Copilot, Claude, Deepseek, Gemini, Grok and Meta.ai
You can click any of the above mentioned AI models and get directed to their webpage where you can try them out.
All these AI systems run on large multi-processor computers somewhere in the cloud and cost billions of dollars. And the good part is that most of them can be used for free !!!
You can access these models through your web-browser or by dedicated apps on your smartphone.

Meta.ai is part of the Meta family of applications like Whatsapp, Facebook and Instagram. And Meta has partially made their model open source. This made it possible for companies and individuals to build their own AI systems. And when you look around on the internet you will indeed find several AI systems which are built for specific purposes. There are AIs available for writing and editing text, programming, medical aid and of course porn.

These models are called Llamas. The name Llama stands for Large Language Model Meta AI. In short they are called LLMs (Large Language Models). The big models need hundreds of gigabytes to run.

But some users started to experiment and soon small models were released. And now there are even models that run on the humble Raspberry Pi.
And as I love playing with AI I had to give it a go.

Ollama

Ollama is the program that runs LLMs for you, and the Ollama website is also where you can search for LLMs and download them.

The program is freely available on the Ollama website:

https://ollama.com/

Installing Ollama

Please note that this will only work on a Raspberry Pi 4 or Pi 5 with at least 8 GB of memory.

To install Ollama I advise starting with a new and fresh Raspberry Pi OS installation.
Preferably do a full OS install and then use raspi-config to reboot the system in console mode. That way the Raspberry Pi has as much free memory available as possible.
For a refresh course on how to setup your Raspberry Pi look here:
https://www.raspberrypi.com/documentation/computers/getting-started.html

Then do an update and upgrade with:

sudo apt update

sudo apt upgrade

Now that your Raspberry Pi is fully operational and up to date, you can install Ollama.

To do so, enter the following command (just copy and paste):

curl -fsSL https://ollama.com/install.sh | sh

Running this will take some time.


On my Raspberry Pi5 with 8Gb it took about a minute.


After a while the installation finishes.
The last line shows that Ollama did not detect a GPU from NVIDIA or AMD.
This means that all computations will be done by the Raspberry's processor.
A GPU would speed up the processing of commands a lot, but alas I do not have one (yet).

This is just the framework that is needed to process the LLM's.
So the next step is to load the LLM itself.

Finding the right LLM

You need to find LLMs that are smaller than your Pi's memory.
LLMs are stored on your SD card but cannot be run from it.
They need to be loaded into the Pi's memory to run, otherwise processing is far too slow.
So the LLMs you need to look for must be smaller than the memory of your Pi.
In my case that means smaller than 8 GB.


Start with visiting the Ollama website:

https://ollama.com

In the top right corner of the site click on Models and you'll get directed to a webpage where all LLM models are listed.

As I have played a bit with different Llama's I know from experience that Gemma is a real good start.

Gemma 3 is the latest model but unfortunately it is too big for my 8 GB Pi. But Gemma 2 is just right. So at the top of the page, in the search field, fill in: Gemma

The one we are looking for is gemma2 and that's the second one that comes up.

Click on the name and a screen with the details of the available models shows up.


The smallest model is gemma2:2b. That is 1.6 GB of data and just not "educated" enough. The largest model is gemma2:27b. That one is the smartest but needs 16 GB. It would (I presume) fit in a Raspberry Pi 5 with 16 GB, but alas not in my 8 GB version.

So what we need is gemma2:9b, which occupies just 5.4 GB and fits my 8 GB Raspberry Pi 5.

At the top of the page it says: ollama run gemma2
That is the general command to start this LLM with Ollama. But we specifically need the 9b version. So the line should be:

ollama run gemma2:9b

Running the found LLM

Now switch over to the Pi's console screen and type the command:

ollama run gemma2:9b


This takes a while, because the first time you use this Llama it needs to be downloaded from Ollama onto your SD card. On my Pi 5 this took about 10 minutes.
The next time you run this model it is faster, because it is already on your SD card and only needs to be loaded into the Pi's memory.

The line at the bottom: >>> Send a message (/? for help)
indicates that the model has loaded into your Pi's memory and is ready for usage.

You can type your questions directly after the three arrows (>>>)

The first test.

Let's try something simple. Let's ask the AI for Pi to 50 decimals.


And there it is. The figures are put on the screen one by one and the answer took about a minute to complete.

Here is another example.


I asked gemma to explain gravity in 5 lines of text, and it did a great job.
If you omitted the 5-lines-of-text limit you would get an enormous amount of text, including Newton's laws, Einstein's theories etc. Trust me, I have tried it.


I also asked gemma to explain gravity to me as if I were 5 years old. And above is the answer.

Can it program in MicroPython

I wanted to see if gemma could help with programming my Raspberry Pi Pico. So I asked it to write a small program that would blink the internal LED 10 seconds on and 10 seconds off.


And what do you say to that.

Stopping ollama

If you want to clear the current conversation just use the clear command like this:

>>>/clear

And if you want to quit ollama use /exit like this

>>>/exit

This last command brings you back to the system prompt.

Only gemma2 ???

Well no, you can load as many Llamas as you like, as long as there is room on your SD card and each is smaller than the memory of your Pi.
But as a model uses up most of the Pi's memory, you can not run more than one model at a time.

I have for example been playing with Deepseek-r1:8b which was no success.

But dolphin-llama3:8b works great. This model is more language oriented and is uncensored.
You can run it with:

ollama run dolphin-llama3:8b

Please note that, just like gemma2, this model has to be downloaded first, so that takes some time.

And at this moment my favorite is mistral-openorca:7b
You can run this with:

ollama run mistral-openorca:7b
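Besides the interactive prompt, Ollama also listens on a local HTTP API (port 11434) that you can call from a script. Here is a Python sketch; the /api/generate endpoint and its JSON fields come from Ollama's API documentation, while the helper function name is my own:

```python
import json

def build_generate_body(model, prompt):
    """Build the JSON body for Ollama's /api/generate endpoint.
    stream=False asks for one complete answer instead of chunks."""
    return json.dumps({"model": model, "prompt": prompt, "stream": False})

body = build_generate_body("mistral-openorca:7b", "Why is the sky blue?")
print(body)

# With Ollama running, the actual call would look like this (not run here):
# import urllib.request
# req = urllib.request.Request(
#     "http://localhost:11434/api/generate",
#     data=body.encode("utf-8"),
#     headers={"Content-Type": "application/json"})
# with urllib.request.urlopen(req) as resp:
#     print(json.loads(resp.read())["response"])
```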

To check how much room is left on your SD card, use df.


The example shows that my SD card is 64 GB and there is still about 39 GB available.
So I can download a lot of other models to play with.

If you want to know which models have already been downloaded to the SD card, just use

ollama list


As you can see I have downloaded 5 different models. And I still have room left on my SD card for experimenting with other models.

Concluding.

Running AI on your Pi is simple and fun to play with.
Just use models that are well below 8 GB.
And remember: the larger the model, the more educated it is.
So you can not compare a local 5 GB model to ChatGPT or Gemini, which run on billion-dollar computer systems.
Nevertheless it's fun to play with, and it costs nothing more than a simple SD card.

Oh and did I say it is fun to play with ???
Really, give it a try yourself, and be amazed.

Till next time
have fun

Luc Volders

Friday, January 9, 2026

Perform a daily task

For an index to all my stories click this text.

What's this story about.

This story shows how to get the accurate time from an NTP server and use that to perform a daily task, every day at the same time. The program is written in MicroPython and will work on a Raspberry Pi Pico W as well as on an ESP32 or ESP8266.

A daily task ??

You can use this to switch the coffee machine on every day at the same time, so you'll have a fresh cup of java every morning when you wake.
Another option is to water your plants every day at the same time when you are on holiday. Or use this to build an automatic fish feeder that feeds the fish once a day. You could even make an alarm clock that wakes you every day. There are plenty of tasks you can use this for. Just use your imagination.


Actually I wrote this story because I got a question on my Discord server from one of my readers, who wanted to know if I had found a way to execute a daily task by using an NTP server. As you may have noticed I am no longer on Discord. If you want to reach me, please do so by mail.

CHAT_GPT

My Discord server had a CHAT-GPT section where you could ask questions to CHAT-GPT. So I asked CHAT-GPT to write a MicroPython program that achieved this. And CHAT-GPT came up with this:


Besides the fact that this code is not going to retrieve the NTP time in any way, there are some issues with it.

- The RP2040 has an RTC in which you can set hours, minutes, seconds, day, month and year, and then read back the current time later. Unfortunately the Pico board's clock oscillator is rated at about 30 ppm. Since there are 86400 seconds in a day, this means a deviation of up to 2.6 seconds a day. That does not sound like much, but it adds up to about 16 minutes a year.

- I showed in a previous story that the NTP time actually is UTC time, which does not take into account your timezone or Daylight Saving Time (DST).
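The drift numbers in the first point are easy to verify:

```python
# 30 ppm tolerance means 30 microseconds of drift per second
ppm = 30e-6
seconds_per_day = 24 * 60 * 60             # 86400 seconds in a day
drift_per_day = seconds_per_day * ppm      # seconds of drift per day
drift_per_year = drift_per_day * 365 / 60  # minutes of drift per year

print(round(drift_per_day, 2))   # 2.59 seconds a day
print(round(drift_per_year, 1))  # 15.8 minutes a year
```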

Corrections

To get this working we need to make several corrections:

- Get the Actual UTC time from an NTP server
- Adjust the time for your timezone
- Adjust that time for DST (Daylight Saving Time)
- Do the above every day to correct the internal clock

After these steps we can check for a certain time (hour and minutes) for the task to start.

Timezone and DST

The first thing I did was create a function that takes a timezone as a parameter. That function looks like this:

def settime(timezone):
    global local_time

    # Set the RTC from the NTP server (this gives UTC)
    ntptime.settime()

    time.sleep(2)
    # Get the local time
    local_time = time.localtime()

    # Adjust for your timezone
    local_time = time.localtime(time.mktime(local_time) + (timezone*3600))

    # Adjust for DST if necessary
    if is_dst_europe(local_time):
        local_time = time.localtime(time.mktime(local_time) + 3600)

    return

This function depends on another function that adjusts the retrieved time for the European DST (Daylight Saving Time) rules.

# Function to check if DST is in effect
def is_dst_europe(t):
    year, month, day, hour, minute, second, weekday, yearday = t

    # Find the last Sunday in March
    # (note: use month 3 here, not the current month)
    dst_start = 0
    for day in range(25, 32):
        if time.localtime(time.mktime((year, 3, day, 1, 0, 0, 0, 0, 0)))[6] == 6:
            dst_start = day

    # Find the last Sunday in October
    dst_end = 0
    for day in range(25, 32):
        if time.localtime(time.mktime((year, 10, day, 1, 0, 0, 0, 0, 0)))[6] == 6:
            dst_end = day

    start = time.mktime((year, 3, dst_start, 1, 0, 0, 0, 0, 0))
    end = time.mktime((year, 10, dst_end, 1, 0, 0, 0, 0, 0))
    now = time.mktime(t)

    return start <= now < end

To call these functions use this line:

settime(1)

Adjust the 1 for your own timezone.


Test if a new day has started

As described at the beginning of this story, the Pico's internal clock has a deviation of about 3 seconds a day. To correct that we will have to fetch the correct time once a day. Here is the code that checks if a new day has started.

# Function to get the current date
def get_current_date():
    current_time = time.localtime()
    return current_time[0], current_time[1], current_time[2]  # year, month, day

# Initialize the previous date
previous_date = get_current_date()

while True:
    current_date = get_current_date()

    # Check if the day has changed
    if current_date != previous_date:
        print("A new day has started!")
        previous_date = current_date

What this code does is check whether the year, month and day in the previous_date variable are equal to those in the current_date variable. If that is not the case a new day has begun, and previous_date is set to current_date so the cycle starts anew.


Start a task every day at the same time

Now that we have the exact time, we can use it to start our task every day at the same time.

import time

# Function to perform the daily task
def daily_task():
    print("This task runs every day at 10:00 AM")

# Set the target time for the task
target_hour = 10
target_minute = 0

while True:
    current_time = time.localtime()
    current_hour = current_time[3]
    current_minute = current_time[4]

    # Check if the current time matches the target time
    if current_hour == target_hour and current_minute == target_minute:
        daily_task()
        # Wait for a minute to avoid running the task multiple times within the same minute
        time.sleep(60)

    # Sleep for a short while before checking the time again
    time.sleep(1)


This part is easy. We check the current hour, which is the 4th entry in the current_time tuple (index 3). Then we check the current minute, which is the 5th entry (index 4). These are compared to the hour and minute we defined as the starting time. If they match, the task is started.

The complete program.

All in all this is a lot of code to get a task to start every day at the same time. Below is the complete program, in which all the above functions are combined together with the code for accessing the internet.

The NTP server library can be obtained from my previous story which you can find here: https://lucstechblog.blogspot.com/2023/04/getting-right-time-with-micropython.html

import network
import ntptime
import time
import machine

# Set your router credentials
ssid = "YOUR-ROUTERS-NAME"
pw = "PASSWORD"

# At what hour and minutes should the task start
target_hour = 22
target_minute = 47

# initialise the local_time variable
# with the (not yet corrected) internal time
local_time = time.localtime()

# Set the timezone
timezoneadjust = 1

# Start the wifi connection
wifi = network.WLAN(network.STA_IF)
wifi.active(True)
wifi.connect(ssid, pw)

# wait for connection
print('Waiting for connection.',end="")
while not wifi.isconnected():
    time.sleep(1)
    print('', end='.')
print("")

ip = wifi.ifconfig()[0]
print("Connected with IP address : "+ip)

time.sleep(1)

# set the time from the server
ntptime.settime()

# set date for date checking
current_time = time.localtime()
previous_date = current_time[0], current_time[1], current_time[2]

# ===============================================================
# Function to check if DST is in effect
def is_dst_europe(t):
    year, month, day, hour, minute, second, weekday, yearday = t
    # print(t)
    # Find the last Sunday in March
    # (note: use month 3 here, not the current month)
    dst_start = 0
    for day in range(25, 32):
        if time.localtime(time.mktime((year, 3, day, 1, 0, 0, 0, 0, 0)))[6] == 6:
            dst_start = day

    # Find the last Sunday in October
    dst_end = 0
    for day in range(25, 32):
        if time.localtime(time.mktime((year, 10, day, 1, 0, 0, 0, 0, 0)))[6] == 6:
            dst_end = day

    start = time.mktime((year, 3, dst_start, 1, 0, 0, 0, 0, 0))
    end = time.mktime((year, 10, dst_end, 1, 0, 0, 0, 0, 0))
    now = time.mktime(t)

    return start <= now < end

#===========================================================
def settime(timezoneadjust):
    # Get the local time
    local_time = time.localtime()

    #adjust for timezone Netherlands
    local_time = time.localtime(time.mktime(local_time) + (timezoneadjust*3600))

    # Adjust for DST if necessary
    if is_dst_europe(local_time):
        local_time = time.localtime(time.mktime(local_time) + 3600)

    return (local_time)

#===============================================================
def testday():
# Function to get the current date
    global previous_date

    current_time = settime(timezoneadjust)
    current_date = current_time[0], current_time[1], current_time[2]  # year, month, day

    # Check if the day has changed
    if current_date != previous_date:
        print("A new day has started!")
        previous_date = current_date
        ntptime.settime()
    else:
        print("Still the same date")
    return


#=================================================
# Start of the actual program
while True:
    current_time = settime(timezoneadjust)
    print("Adjusted time:", current_time)
    testday()
    # print("in the while",current_time)
    current_hour = current_time[3]
    print(current_hour)
    current_minute = current_time[4]
    print(current_minute)
    # Check if the current time matches the target time
    # Then here comes the daily task
    if current_hour == target_hour and current_minute == target_minute:

        print ("Here comes the daily task")

        # Wait for a minute to avoid running the task multiple times within the same minute
        time.sleep(60)

    time.sleep(10)

You can copy this code and paste it in Thonny's editor to transfer it to your microcontroller.

In this example the time for the daily task is set at 22:47. And you need to change the code where it says "Here comes the daily task" to fill in the task you want to have performed.

Expansion

By adding multiple target hours and minutes and using multiple tests in the main part of the program, you can schedule multiple tasks in one day.
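As a sketch of that idea (the schedule list and the function name are my own invention), you could keep the target times in a list of (hour, minute) tuples and test the current time against all of them:

```python
# Hypothetical schedule: one task at 07:00 and another at 22:47
targets = [(7, 0), (22, 47)]

def task_due(hour, minute, targets):
    """Return True when (hour, minute) matches one of the target times."""
    return (hour, minute) in targets

# In the main loop you would call it with current_time[3] and current_time[4]
print(task_due(22, 47, targets))  # True
print(task_due(12, 30, targets))  # False
```

You would still sleep 60 seconds after a hit, so a task does not fire twice within the same minute.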

Till next time
Have fun

Luc Volders