The Object Tracking algorithm tracks a SINGLE object only! It cannot track multiple objects.
Also, since each algorithm's training data is kept separate from the others, one algorithm's LEARNED data cannot be used by another algorithm.
Firmware V0.5.1 and above allow detection of multiple occurrences of the same colors. While all these same-color objects will have the same Object ID number, their location parameters will differ, and the detection count displayed in the INFO block will tell you how many of them have been detected.
https://github.com/HuskyLens/HUSKYLENSArduino/tree/master#enum-protocolalgorithm
https://github.com/HuskyLens/HUSKYLENSArduino/blob/master/HUSKYLENS/HUSKYLENS.h
huskylens.writeAlgorithm(ALGORITHM_LINE_TRACKING); //Switch the algorithm to line tracking.
In Object Recognition mode there is no way to differentiate individual objects within the same category (e.g. person, bike). Additionally, in its initial state, none of the 20 recognized object categories have representative Object IDs assigned; they all return Object ID=0.
To remedy the problem, you need to train a minimum of a SINGLE object for each of the 20 categories and assign them unique Object IDs 1-20, e.g.:
aeroplane=1, bicycle=2, bird=3, boat=4, bottle=5, bus=6, car=7, cat=8, chair=9, cow=10, diningtable=11, dog=12, horse=13, motorbike=14, person=15, pottedplant=16, sheep=17, sofa=18, train=19, TV=20.
Alternatively, you can assign totally arbitrary numbers to each category, as long as each one is unique and not shared with any other.
Once that is done, any object detected in this mode will return BLOCK Info data that has the category Object ID as the last field. Keep in mind, the training image you use for a category has nothing to do with the actual detection; it only serves to assign an Object ID to the category. Still, make sure the image you pick is of the category type and is identified as the category name during detection, i.e. do NOT train the bicycle category with the picture of a tree!
Looking the number up and matching it to a table you create will tell you what type of object you have detected.
Once you are happy with the results, you can save the Object Recognition data to the SD card and your Object IDs will be preserved for any future use by simply reloading that training data for the algorithm category.
Make sure to power the HL using the USB port from a separate 5V power source. Some algorithms of the HL draw a lot of current and cause a reset if powered by the Serial interface cable alone.
I don't know the exact type of microcontroller you are using, but there are two ways to go here:
Using I2C:
You can connect multiple devices to an I2C interface, as long as they are individually addressed. In this mode, they can all share the I2C bus and work together. However, to connect multiple devices to the same I2C pins, you need a multi-port connector that allows you to do this. There are GROVE-type I2C splitters, such as the Pa.HUB from M5. This approach may require a cable conversion if the microcontroller platform used does not have any Grove-type connectors, but it is not a complicated process.
Using UART:
You can have the HL connect and operate in 9600 baud UART mode, while your DF2301Q device uses the I2C pins to connect.
This is possible, but a bit tricky to execute. There is no straightforward single step to do it. But here is a workflow that will allow you to update your trained files in stages.
You will need an SD card to store interim trained model data.
The first thing to remember is that LEARNED objects get Object IDs assigned in the order of learning. Each new person has to be assigned a different number in a sequential manner.
Start with:
* set "learn multiple" to Yes.
* "forget all learned objects": https://github.com/HuskyLens/HUSKYLENSArduino/blob/master/HUSKYLENS%20Protocol.md#command_request_forget-0x37
When HL returns block info, you can optionally request the INFO details that are sent before the detected blocks:
https://github.com/HuskyLens/HUSKYLENSArduino/blob/master/HUSKYLENS%20Protocol.md#command_return_info-0x29
The first pieces of data in this INFO result are "number of blocks detected" and "number of learned blocks".
Then for each block detected, there will be a result entry with coordinates, and at the end of each entry there will be an "object ID". If this number is 0 (zero), the object is NOT learned; if it is greater than 0 (zero), it is the LEARNED object number.
https://github.com/HuskyLens/HUSKYLENSArduino/blob/master/HUSKYLENS%20Protocol.md#command_return_block0x2a
When you start training in stages, it is MOST crucial to keep track of this LEARNED object number and make sure you assign it incrementally for each learned object.
For a simple exercise, follow the workflow:
* forget all learned objects.
* learn a face and assign a LEARNED object number (1) to it.
* save this 1 object model data to the SD card as model #1.
* clear all learned object info in the program. Or for a drastic but fast method, factory reset the camera.
* load the model #1 from SD card. This will restore all LEARNED info into Face Recognition.
* learn an ADDITIONAL face and assign a LEARNED object number (2) to it.
* save this 2 object model to the SD card as model #1, overwriting the previous one.
At this point, you should have a saved model #1 on the SD card with two LEARNED faces, and you did not have to do the learning back to back. Also, while in this exercise you trained the camera on one face before creating the backup file, you can actually train it on as many faces as you want before creating the interim backup.
Then, when you reset everything and reload the backup file, you will restore all the previously saved faces. The thing to be cautious about when you start the second round of training is to make sure you monitor the LEARNED count and start incrementing from it, rather than going back to one.
Due to the nature of the environment, you may find it hard to implement this flow in an Arduino environment.
I use MicroBlocks (https://microblocks.fun/) for the camera coding and its real-time interactive nature allows much more flexibility to achieve this result. Check out the WIKI: https://wiki.microblocks.fun/en/extension_libraries/huskylens .
You cannot train HL with photos stored on the internal SD Card while running the program code.
Is your Serial Monitor speed set to the same value as the Serial.begin(115200) in the sketch?
Motor Shield Pins:  D      C     G      V
HuskyLens Pins:     T      R     -      +
HuskyLens Cable:    green  blue  black  red

Connect them as shown above, matching the cable colors.
There is no method to save HL pictures to external media. The best that can be done is to save them to the internal SD card, then remove it and read it in an external reader (e.g. on an RPi).
re: can HuskyLens run multiple algorithms at the same time, such as line tracking and tag recognition?
I was under the impression that MULTIPLE algorithms simultaneously was NOT POSSIBLE !
@Bard17: Is it possible to change from object classification to tag recognition then get back to object classification?
Yes, you can switch algorithms back and forth programmatically. Just allow enough time for the camera to adjust to the new settings.
This was among the suggestions made a while ago. Unfortunately, it has not been implemented yet.
In the meantime, if an Object ID : Name association is needed, then a user-implemented dictionary is necessary.
This is not an answer but rather a possibility based on how things work.
These answers assume that the camera will be positioned in a fixed manner and will have a steady view of the area to be analyzed.
To recognize P R N D with a color background or frame around the selected letter is a matter of training the model.
Ignoring the letters, if there is a significant color change involved with the gear selections, then color recognition algorithm might be helpful.
Objects recognized are returned with block center coordinates, which can be correlated to the positions of the letters P R N D; hence the need for a fixed mounting.
Still, it will take a bit of fiddling and training to test it all out. Hope it will work out.
I tried the HUSKYLENS_ADVANCED.ino with Arduino.
Camera was powered off of the 5V pin and ground, I2C was off of pins SCL and SDA on the board next to the reset button.
With the camera starting out with FW: HUSKYLENSWithModelV0.5.1aNorm.kfpkg and reset to factory default,
I got the same display as you did. This is normal as there are no objects recognized or learned.
Then I manually selected the Object Classify algorithm and pressed the learn button to make it learn an object. It got assigned ID1.
I set the void loop object selections to:
if (huskylens.requestBlocks(ID1)) //request blocks tagged with ID1 from HUSKYLENS
When I reran the Arduino program, I got the correct results, as below:
⸮###################################
Count of learned IDs:1
frame number:201
#######
Get all blocks and arrows. Count:1
Block:xCenter=160,yCenter=112,width=224,height=224,ID=1
#######
Get all blocks. Count:1
Block:xCenter=160,yCenter=112,width=224,height=224,ID=1
#######
Get all arrows. Count:0
#######
Get all blocks and arrows tagged ID0. Count:0
#######
Get all blocks with learn ID equals ID0. Count:0
#######
Get all arrows tagged ID0. Count:0
#######
Get all blocks and arrows tagged ID1. Count:1
Block:xCenter=160,yCenter=112,width=224,height=224,ID=1
#######
Get all blocks and arrows tagged ID2. Count:0
I hope this will provide confirmation that the demo program and camera work as intended.
Your deviations could be due to other reasons:
- FW level
- Power issues
- Code related, etc.
It is important to select the correct object selection criteria in the void loop if-statements.
With the camera in the Face Recognition algorithm, I switched my if selection to:
if (huskylens.requestArrows()) //request only arrows from HUSKYLENS
and ended up with nothing displaying, due to the mismatch between the algorithm and the selected/detected objects:
###################################
Count of learned IDs:0
frame number:564
#######
Get all blocks and arrows. Count:0
#######
Get all blocks. Count:0
#######
Get all arrows. Count:0
#######
Get all blocks and arrows tagged ID0. Count:0
#######
Get all blocks with learn ID equals ID0. Count:0
#######
Get all arrows tagged ID0. Count:0
#######
Get all blocks and arrows tagged ID1. Count:0
#######
Get all blocks and arrows tagged ID2. Count:0
Also note that the learned object count is per algorithm, as can be seen in my displays: I had a learned object in Object Classify, but asked for Arrows in the Face Recognition algorithm and got 0 results.
Check out the HuskyLens Arduino library section and Protocol description in Github.
It has byte sequences for the entire command set.
Please ignore this post. It is not a problem.
Hello GMartin,
I do not use Mind+, but I am familiar with it.
There is a block in the Husky extension that returns the “number of learned blocks”.
This number is per algorithm, and you should be able to find out the info you are looking for in it.
Hope it helps.
Hello George,
Just replying in support of the problem you have described.
I am experiencing the same thing and other anomalies with the way objects are handled.
I tried documenting them here in hopes of an answer; however, the support staff is totally unresponsive
to HuskyLens issues. It is as if they dumped the product and don't care about it anymore.
The current firmware level and its performance leave much to be desired for any meaningful application.