I managed to get an automatic export of any Dude graph with cURL.
The way I do it is by running a script every five minutes, as a Windows scheduled task.
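For reference, the scheduled task itself can be created from the command line with schtasks; this is just a sketch, where the task name and the script path are placeholders you would adapt:

```
schtasks /Create /SC MINUTE /MO 5 /TN "Dude2Web" /TR "C:\scripts\dude2web.bat"
```

/SC MINUTE with /MO 5 runs the task every five minutes.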
The Windows script looks like this:
Code:
@echo off
"c:\Program files\curl\curl" -s -d process=login -d page=start -d user=JohnSmith -d password=Pocahontas -c cookiecurl.tmp "http://dudeserver.myenterprise.com:81/dude/main.html" > dude2web.log
rem
rem
echo Servers >> dude2web.log
copy "C:\Program Files\Dude\data\files\Servers.png" C:\wamp\www\dude\servers.png >> dude2web.log
"c:\Program files\curl\curl" -b cookiecurl.tmp -o C:\wamp\www\dude\server1cpu.png "http://dudeserver.myenterprise.com:81/dude/chart.png?page=chart_picture&download=yes&id=248168&type=0&num=1" >> dude2web.log
"c:\Program files\curl\curl" -b cookiecurl.tmp -o C:\wamp\www\dude\server2disk.png "http://dudeserver.myenterprise.com:81/dude/chart.png?page=chart_picture&download=yes&id=225764&type=0&num=1" >> dude2web.log
rem
rem
echo Network >> dude2web.log
copy "C:\Program Files\Dude\data\files\Network.png" C:\wamp\www\dude\network.png >> dude2web.log
"c:\Program files\curl\curl" -b cookiecurl.tmp -o C:\wamp\www\dude\routertraffic.png "http://dudeserver.myenterprise.com:81/dude/chart.png?page=chart_picture&download=yes&id=2956126&type=2&num=0" >> dude2web.log
"c:\Program files\curl\curl" -b cookiecurl.tmp -o C:\wamp\www\dude\packetloss.png "http://dudeserver.myenterprise.com:81/dude/chart.png?page=chart_picture&download=yes&id=6736008&type=0&num=1" >> dude2web.log
...
"c:\Program files\curl\curl" -s -b cookiecurl.tmp "http://dudeserver.myenterprise.com:81/dude/login.html?drop_cookie=true" >> dude2web.log
In this example, my Dude server is http://dudeserver.myenterprise.com:81 , so the Dude web service is running on port 81.
Note that every cURL invocation is one single line, from "c:\Program files\curl\curl" to ">> dude2web.log". If you see it split across two lines, that is due to the forum engine or your web browser.
The first cURL invocation is not an HTTP GET but a POST. In the POST I send the Dude's web server the following data:
- process=login
- page=start
- user=JohnSmith
- password=Pocahontas
Please change the user and the password to any account that can navigate the web interface of your Dude server.
The first cURL invocation gets a cookie from the Dude's web server and stores it in a file named cookiecurl.tmp (option -c).
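As a side note, the cookie jar that cURL writes with -c (and reads back with -b) is a plain-text file in the classic Netscape cookie format. The sketch below just fabricates such a file to show the layout; the cookie name and value here are made up, the real ones come from the Dude server:

```shell
#!/bin/sh
# Fabricated Netscape-format cookie jar (tab-separated fields):
# domain, include-subdomains flag, path, secure flag, expiry, name, value
cat > cookiecurl.tmp <<'EOF'
# Netscape HTTP Cookie File
dudeserver.myenterprise.com	FALSE	/	FALSE	0	session	0123456789abcdef
EOF
# A later "curl -b cookiecurl.tmp ..." sends the name=value pair back
# as a Cookie: header on each request.
cat cookiecurl.tmp
```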
The echo is there because I want a log. You can remove this line.
The copy line copies a network map from the Dude folder to my web server folder. I have configured the Dude to export the map every 5 minutes through the export tab; it creates a PNG file. But you're not reading this post for the automatic map export, are you?
The subsequent cURL invocations, for instance
"c:\Program files\curl\curl" -b cookiecurl.tmp -o C:\wamp\www\dude\server1cpu.png "http://dudeserver.myenterprise.com:81/d ... pe=0&num=1" >> dude2web.log
- makes the call with the cookie stored in cookiecurl.tmp (option -b)
- retrieves the data -whatever it is- and stores it in a file like C:\wamp\www\dude\server1cpu.png (option -o)
- the data retrieved is specified in the url.
- after the URL there is a ">> dude2web.log" that appends to a log. If you don't want logs, you can remove this part as well.
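If you export many charts, the near-identical invocations can be generated from a list of chart IDs instead of being maintained by hand. Here is a minimal POSIX-shell sketch (the IDs and filenames are just the examples from above, and the real download line is left as a comment; on Windows you would keep the explicit batch lines or port this to PowerShell):

```shell
#!/bin/sh
# Build the chart URLs from id / output-file pairs.
# The actual fetch for each pair would be:
#   curl -b cookiecurl.tmp -o "$file" "$url"
base='http://dudeserver.myenterprise.com:81/dude/chart.png'
while read -r id file; do
  url="${base}?page=chart_picture&download=yes&id=${id}&type=0&num=1"
  printf '%s -> %s\n' "$file" "$url"
done <<'EOF'
248168 server1cpu.png
225764 server2disk.png
EOF
```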
The last cURL invocation is the HTTP session logout for the cURL client. If the logout is not performed, then later you will NOT get new images but the same ones, unless you make the next image retrieval after the session timeout has expired (Settings -> Server tab -> Web Access parameters -> Session Timeout; the default is 00:15:00).
How do I find out the URLs:
1 Open the browser and log on to the dude server.
2 Browse and locate the image to export
3 Depending on your browser:
In Microsoft Internet Explorer, right-click on the image and select Properties. The URL is there.
In Mozilla Firefox, right-click on the image and select "Copy Image Location". Now you have the URL in the clipboard.
4 Remember to put the URL between double quotes (").
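The quotes matter because the chart URLs contain & characters, which both cmd.exe and Unix shells treat as a command separator; unquoted, everything after the first & would be cut off. A quick shell illustration with one of the example URLs from above:

```shell
#!/bin/sh
# Quoted, the URL travels to curl as one intact argument:
url='http://dudeserver.myenterprise.com:81/dude/chart.png?page=chart_picture&download=yes&id=248168&type=0&num=1'
printf '%s\n' "$url"
# Unquoted, the shell would stop parsing the URL at the first '&'
# and treat the rest ("download=yes", "id=248168", ...) as separate commands.
```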
DISCLAIMER
This is working for me. It will probably work for you too, but it depends on two pieces of software that aren't under my control (cURL and the Dude itself), so I can't guarantee it will work (at least, not forever).
The URL for each image seems not to change as long as the object does not change. But it could change tomorrow, due to the Dude's behaviour or because you have made changes in the Dude configuration. Only a Dude engineer could state whether the link to a graph is permanent.
Simply put, I have had it working at my job for a week, and it looks like it works. That's all.
Regards.
POST EDIT: This has been running flawlessly since I wrote it three months ago, and I'm really happy with this solution.