Getting Data DIRECTLY from a Tigo TAP - is it possible?

Thanks! This was very helpful, with an assist from Google Translate.

I was able to get in and install a cron job to export live data into InfluxDB. It's been working fine for two days, and it survived a reboot.

Next I'll integrate with Home Assistant and OpenEVSE. Fun!
Awesome. Let me know how you end up visualizing it in HA.

How are you running influxdb?
 

I'm thinking of two different ways to do this. Haven't settled on one yet.

Option 1: Use the InfluxDB integration for Home Assistant to create an HA sensor that queries InfluxDB on an interval.

Option 2: Use Telegraf to query InfluxDB on an interval and post the "live" values to an MQTT topic.

I may just implement both options for my use cases. The MQTT option gives my EV charger the direct capability of diverting a percentage of the generated PV to charge my car. I suppose I could do it all in Home Assistant as well.
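
For anyone curious what Option 2 could look like end-to-end, here's a rough shell sketch of the idea: pull the newest value out of InfluxDB 2.x with a Flux query and republish it with mosquitto_pub. Every name in it (host, org, bucket, field, topic, token) is a placeholder, and Telegraf's native input/output plugins would replace all of this:

Bash:
#!/bin/sh
# Rough sketch only: read the newest PV power value from InfluxDB 2.x and
# republish it to MQTT. Host, org, bucket, field, topic, and token are all
# placeholders; Telegraf would do this with its native plugins instead.
INFLUX="http://influxdb.local:8086"
ORG="my-org"
BUCKET="pv-stats"
TOKEN="MySecretToken"

# Flux: last value of a hypothetical "Pin" field from the past 5 minutes
FLUX="from(bucket:\"$BUCKET\") |> range(start: -5m) |> filter(fn: (r) => r._field == \"Pin\") |> last()"

# The v2 query API returns annotated CSV; crudely pull out the _value column
VALUE=$(curl -s -XPOST "$INFLUX/api/v2/query?org=$ORG" \
    --header "Authorization: Token $TOKEN" \
    --header "Content-Type: application/vnd.flux" \
    --header "Accept: application/csv" \
    --data "$FLUX" | tr -d '\r' | awk -F',' 'NR==2 {print $7}')

# Publish where the EV charger (or anything else) can pick it up
mosquitto_pub -h mqtt.local -t "pv/power" -m "$VALUE"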

I run InfluxDB on one of my ARM devices. That's where all my sensor data resides. With option 2 I could output to multiple destinations, one being the free InfluxDB cloud service for redundancy.

Edit: I just realized I didn't answer your question of how I'll visualize in HA. When you create a sensor you can use it on the built-in Energy dashboard. I'll just link the sensor to the Solar Generation section. Should be easy. 😎
 

I guess I could probably run InfluxDB on the same Pi 4B that I'm running HA on? Otherwise, I have a Windows box that mostly just runs Plex.

I have data from Solar Assistant piped into the HA Energy dashboard. (Someday I'll probably replace SA with direct monitoring of Sol-Ark using RS485).
I was trying to figure out if there was a way to visualize all of the panels individually, but I'm still trying to wrap my head around YAML.
 

Oh I see. You're asking about visualizing each panel in HA. Yeah, that's not something I want to do in HA. I intend to visualize that in Grafana as a next step.

Probably an overlay of panel values over an image of my roof? I dunno. Maybe that would be too tacky.
 
oof. That was both easier and harder than I expected.
I just installed InfluxDB into HA. This makes it easy to pass data from HA to InfluxDB, but makes it harder to get data directly into the InfluxDB instance.
With the help of ChatGPT, I created a bash script to parse
Code:
/mnt/ffs/data/daqs
and then pass the info for each optimizer to HA using the REST API (which can be called from curl).
Once in HA, I can pass the info to InfluxDB.
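The REST call itself is just an authenticated POST; a minimal version, with a hypothetical entity name, token, and values, looks something like this:

Bash:
# Create/update an HA sensor over the REST API (names/values are examples)
curl -X POST \
     -H "Authorization: Bearer $HA_TOKEN" \
     -H "Content-Type: application/json" \
     -d '{"state": "active", "attributes": {"Pin": "123", "Vin": "35.2"}}' \
     "http://homeassistant.local:8123/api/states/sensor.tigo_a1"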

This worked!
But it was made more complicated because:
  • Apparently this thing runs BusyBox and doesn't have a complete version of grep, so the initial parsing method didn't work.
  • For... reasons... crond is using a file that is NOT where crontab looks. So crontab -e doesn't edit the right thing.
  • I created HA sensors for each optimizer, that then have attributes for power, voltage, etc. Although this is cleaner, I haven't figured out how to get InfluxDB to understand this (yet).
 

Yes, normal crontab editing will not work. Simply edit the crontab located at /mnt/ffs/etc/crontab.

Here is the script named data-to-influxdb.sh that I use to push data to my standalone InfluxDB instance:

Bash:
#!/bin/sh

INFLUXIP="192.168.0.200:8086"
ORG="homelab"
BUCKET="tigo-pv-stats"
TOKEN="MySecretToken"

# Grab all current optimizer values and join them onto a single line
DATA=$(getinfo --dir /mnt/ffs/data/daqs --prefix daqs. | tr '\n' ',')

# Massage that into InfluxDB line protocol ("measurement fields timestamp"):
# tail/head trim the fixed-length framing at the start and end of the blob,
# the first sed turns empty values ("Foo=,") into "Foo=0," so the write
# doesn't fail, and the second sed turns ",TimeStamp=" into the space that
# separates the field list from the timestamp
NEWDATA=$(echo $DATA | tail -c +34 | head -c -9 | sed 's/=\,/=0\,/g' | sed 's/\,TimeStamp=/\ /g')

# The device reports seconds, but InfluxDB expects nanoseconds by default,
# so pad the timestamp with nine zeros
NEWDATA=$NEWDATA"000000000"

curl -XPOST "http://$INFLUXIP/api/v2/write?org=$ORG&bucket=$BUCKET" --header "Authorization: Token $TOKEN" --data-raw "Tigo-ACC $NEWDATA"

My crontab has this entry:

Code:
* * * * * /mnt/ffs/bin/data-to-influxdb.sh
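
To sanity-check that points are landing, this (run from any machine with the influx CLI, not the CCA itself) should echo back recent rows:

Code:
influx query --host http://192.168.0.200:8086 --org homelab --token MySecretToken 'from(bucket:"tigo-pv-stats") |> range(start: -15m) |> limit(n: 5)'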

Hope this helps.
 
Thanks!

I saw similar code on the German forum. I tried to figure out what they were trimming and why they were padding the end with a bunch of zeros, and gave up when it seemed like I couldn't pass the data directly into the InfluxDB instance running inside HA.

I did figure out the attributes. They show up as "fields".
 
@Mr.Hyde or @darkmode, any chance you can summarize the process for getting into the CCA? I'm struggling to navigate photovoltaikforum.

I guess the real question is the password; do you need to manually scan the CGI for a unique password, and does it change periodically? I assume you have yours disconnected from the internet.

My understanding: within the first two hours of the unit starting, connect to the CCA with a curl request, and use a cron job to keep the connection alive.
 
Mine was connected for days before I got control of it.

DOING THE FOLLOWING MAY VOID YOUR WARRANTY

First thing is to connect to
Code:
http://[cca ip address]/cgi-bin/shell
Username:
Code:
Tigo
Password:
Code:
$olar
I had to try several browsers before that worked.
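(If a browser refuses to cooperate, the same login can be attempted from a shell; this assumes the prompt is plain HTTP basic auth, which I haven't verified:)

Code:
# single quotes keep the shell from expanding $olar
curl -u 'Tigo:$olar' "http://[cca ip address]/cgi-bin/shell"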

Then you should be able to ssh into it using
Username:
Code:
root
Password:
Code:
gW$70#c

Then you want to remount the file system:
Code:
mount -o remount,rw /

The device is locked down to only respond to certain IP addresses. So we need to fix that.
Code:
echo "/usr/sbin/iptables -t nat -D INPUT -p tcp --dport 80 -j SNAT --to 10.11.1.1" >> /etc/rc.httpd

Code:
echo "/usr/sbin/iptables -t nat -I INPUT -p tcp --dport 80 -j SNAT --to 10.11.1.1" >> /etc/rc.httpd

Reboot.

Then you should be able to access
Code:
http://[cca ip address]/cgi-bin/gwfwupui
which has all the info.

I then added a bash script to push the information to Home Assistant using curl and added this as a cron job.

Mine still has access to the internet, though I have disabled OpenVPN.
 
You are a rockstar, thanks!!
 
Mine was connected for days before I got control of it.

Code:
echo "/usr/sbin/iptables -t nat -D INPUT -p tcp --dport 80 -j SNAT --to 10.11.1.1" >> /etc/rc.httpd

Mine doesn't have any iptables chains according to 'iptables -L', and more specifically there's no 'nat' table. As a result, when I run the above (with my CCA's IP address instead) I get 'iptables: No chain/target/match by that name.'


Mine still has access to the internet, though I have disabled OpenVPN.

There's an OpenVPN client on there? I don't see one running and don't really see any config for one.

I wonder if I have a newer version (or older version). I just bought the whole mess a couple months ago.

Is this what you're using ( https://github.com/rp-/tigo-exporter/tree/master ) or did you do your own thing? I see a bunch of CSVs, so I'm thinking of just curl'ing those somewhere using cron and processing them either in HA or dropping the data into my own MySQL or something.

Edit: It'd be cool if we could convince Pierre to parse them in SolarAssistant and we can just push them to its MQTT server.
 
Mine doesn't have any iptables chains according to 'iptables -L', more specifically not a 'nat' table. As a result when I run the above (with my CCA's IP address instead) I get 'iptables: No chain/target/match by that name.'
I'll have to check mine when I get home. I'm sure that's the command I ran, but I can poke around a bit. I can also see if I can pull some version info.

There's an OpenVPN client on there? I don't see one running and don't really see any config for one.
That was a suggestion from the German forum to ideally prevent unwanted firmware updates... but not sure if it actually will do that.
I think I found the binary by using "locate"

In my brief looking it didn't seem like there was a good way to do MQTT from it. So I use curl to push the data to Home Assistant as sensor data using the Home Assistant API.
(please excuse the novice bash scripting)

Code:
#!/bin/sh

# Home Assistant connection details -- fill in your own base URL and a
# long-lived access token (both placeholders here)
HOST="http://homeassistant.local:8123"
TOKEN="MyLongLivedAccessToken"

output=$(getinfo --dir /mnt/ffs/data/daqs --prefix daqs.)

# Function to send data to Home Assistant via its REST API
send_to_hass() {
    # HA entity IDs must be lowercase
    entity=$(echo "$1" | tr 'A-Z' 'a-z')
    data_json=$2
    curl -X POST \
         -H "Authorization: Bearer $TOKEN" \
         -H "Content-Type: application/json" \
         -d "{\"state\": \"active\", \"attributes\": $data_json}" \
         "$HOST/api/states/sensor.tigo_$entity"
}

# Parsing values and sending them as one sensor with multiple attributes
for id in 1 2 3 4 5 6; do
    prefix="LMU_A${id}"

    Iin=$(echo "$output" | grep "${prefix}_Iin=" | sed "s/.*${prefix}_Iin=\([^ ]*\).*/\1/")
    Pin=$(echo "$output" | grep "${prefix}_Pin=" | sed "s/.*${prefix}_Pin=\([^ ]*\).*/\1/")
    Pwm=$(echo "$output" | grep "${prefix}_Pwm=" | sed "s/.*${prefix}_Pwm=\([^ ]*\).*/\1/")
    Temp=$(echo "$output" | grep "${prefix}_Temp=" | sed "s/.*${prefix}_Temp=\([^ ]*\).*/\1/")
    Vin=$(echo "$output" | grep "${prefix}_Vin=" | sed "s/.*${prefix}_Vin=\([^ ]*\).*/\1/")

    # JSON data of attributes
    attributes_json="{\"Iin\": \"$Iin\", \"Pin\": \"$Pin\", \"Pwm\": \"$Pwm\", \"Temp\": \"$Temp\", \"Vin\": \"$Vin\"}"

    # Send data to Home Assistant
    send_to_hass "$prefix" "$attributes_json"
done

I'm still not sure if sending them as one sensor per panel with multiple attributes is correct vs sending each parameter from each panel as its own sensor.
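
If separate sensors turn out to be the better model (HA's long-term statistics only record a sensor's state, not its attributes), the loop above could post one entity per parameter instead. A hypothetical variant, reusing $output, $HOST, and $TOKEN from the script above, with made-up entity names:

Bash:
for id in 1 2 3 4 5 6; do
    prefix="LMU_A${id}"
    for param in Iin Pin Pwm Temp Vin; do
        value=$(echo "$output" | grep "${prefix}_${param}=" | sed "s/.*${prefix}_${param}=\([^ ]*\).*/\1/")
        # one sensor per panel/parameter, value carried in the state;
        # HA entity IDs must be lowercase
        entity=$(echo "tigo_${prefix}_${param}" | tr 'A-Z' 'a-z')
        curl -X POST \
             -H "Authorization: Bearer $TOKEN" \
             -H "Content-Type: application/json" \
             -d "{\"state\": \"$value\"}" \
             "$HOST/api/states/sensor.${entity}"
    done
done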
 
I'll have to check mine when I get home. I'm sure that's the command I ran, but I can poke around a bit. I can also see if I can pull some version info.

I was going to get some version information from mine, but now I'm having pretty consistent problems getting ssh'd into it. I have ssh enabled, and nmap tells me the port is open but filtered; when I try ssh'ing, it takes forever to establish the connection, then eventually connects and waits for the remote end to initiate the handshake. This morning (early) it went right through immediately. Pings are nice and quick/solid, and it's connected via ethernet (not WiFi)... Weird.

That was a suggestion from the German forum to ideally prevent unwanted firmware updates... but not sure if it actually will do that.
I think I found the binary by using "locate"

I didn't think BusyBox had locate, but I was going to check that as well; see the ssh problems above.

(please excuse the novice bash scripting)

Appreciate it. And your scripting looks fine to me.
 
Hardware Platform: Tigo CCA2
Firmware Version: 3.7.4-ct
Mgate Version G8.59\rJul 6 2020\r16:51:51\rGW-H158.4.3S0.12\r

I have to go to https://[cca ip address]/cgi-bin/shell if the unit has rebooted to re-enable ssh access.

I didn't think Busybox had locate but was going to check that as well except see ssh problems above.
You're right. I used "which". It was in /usr/sbin

Mine doesn't have any iptables chains according to 'iptables -L', more specifically not a 'nat' table. As a result when I run the above (with my CCA's IP address instead) I get 'iptables: No chain/target/match by that name.'
You actually want to run exactly this command. Don't change the IP. There's something about fooling it into thinking you're connecting from that address range, even if you're not. (I didn't understand it, but you can find more details on the German forum -- Chrome's built-in translation works pretty well).
 
I'm far past the /cgi-bin/shell stage. ssh is sporadically negotiating a connection, exchanging keys, and sometimes I even get to a shell prompt before the connection hangs and eventually times out. I even switched the unit from WiFi to hard-wired, and my Cisco switch shows no interface errors. Oddly, it's negotiating 10/half instead of 100/full, but that could just be their hardware. I've tried handing it a different DHCP lease and moving its IP address. It's odd that while sshd is failing, ping just keeps rolling along with short ping times and no packet loss. If it wasn't for "ping" working, I'd swear it was a cabling or infrastructure problem.

Anyway, I'm sure it's something on my end.
 