- marculix
- Posts : 1
Join date : 2014-05-24
Location : Switzerland
Selenium traces in log-files of remote webservers?
Sun May 25, 2014 12:30 am
Hi Selenium community
I'm using the Selenium Perl client to connect to a remote website sporadically. That works great.
My question:
Will Selenium actually leave specific traces in the log files of remote webservers that indicate to the admin that an automated RC tool was grabbing data or submitting queries?
I assume that only the usual information is sent to the remote HTTP server, such as the browser's User-Agent header: "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.7; rv:27.0) Gecko/20100101 Firefox/27.0"
Could someone confirm or refute this?
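One way to check this for yourself is to point the Selenium-driven browser at a small local HTTP server that dumps every header it receives; that shows exactly what a remote webserver's logs could see. The sketch below is Python rather than Perl, purely for illustration; the port number and response body are arbitrary choices, not anything Selenium-specific.

```python
# Minimal local HTTP server that prints every request header it receives.
# Point the Selenium-driven browser at http://localhost:8000/ and compare
# the output with an ordinary manual Firefox visit.
from http.server import BaseHTTPRequestHandler, HTTPServer


class HeaderDumper(BaseHTTPRequestHandler):
    def do_GET(self):
        # Print the request line and all headers, i.e. roughly what a
        # remote access log could record about the client.
        print(self.requestline)
        for name, value in self.headers.items():
            print(f"{name}: {value}")
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"headers logged\n")


def run(port=8000):
    """Serve until interrupted (Ctrl-C to stop)."""
    HTTPServer(("localhost", port), HeaderDumper).serve_forever()
```

If the headers printed for a Selenium-driven visit match those of a manual visit, nothing in the request itself identifies the automation.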
Perhaps I am violating a remote webserver's policy, and my concern is that the site admin will sooner or later block HTTP queries from clients that appear to be job-scheduled automated robots (e.g. by installing a CAPTCHA).
My environment (plain vanilla stuff):
-Mac OS X 10.7.5 (Darwin Kernel Version 11.4.2)
-Perl 5.12 (revision 5 version 12 subversion 3)
-Java JRE 1.6.0_65-b14-462-11M4609
-Selenium Java standalone server 2.39.0
-Selenium IDE 2.5.0
-Firefox 27.0
-Firefox add-ons: Selenium IDE 2.5.0 and Selenium IDE: Perl Formatter
-Perl/Selenium uses standard element locators such as css, xpath, and url.
Thanks for any helpful hints!
Marc