:mod:`urllib.robotparser` ---  Parser for robots.txt
=====================================================

.. module:: urllib.robotparser
   :synopsis: Load a robots.txt file and answer questions about
              fetchability of other URLs.

.. sectionauthor:: Skip Montanaro <skip@pobox.com>

.. index::
   single: WWW
   single: World Wide Web
   single: URL
   single: robots.txt

This module provides a single class, :class:`RobotFileParser`, which answers
questions about whether or not a particular user agent can fetch a URL on the
Web site that published the :file:`robots.txt` file.  For more details on the
structure of :file:`robots.txt` files, see http://www.robotstxt.org/orig.html.
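
A minimal (made-up) sample of the file format: the record below applies to
all user agents and forbids fetching anything under ``/cgi-bin/``::

   User-agent: *
   Disallow: /cgi-bin/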

.. class:: RobotFileParser(url='')

   This class provides methods to read, parse and answer questions about the
   :file:`robots.txt` file at *url*.

   .. method:: set_url(url)

      Sets the URL referring to a :file:`robots.txt` file.

   .. method:: read()

      Reads the :file:`robots.txt` URL and feeds it to the parser.

   .. method:: parse(lines)

      Parses the *lines* argument, a list of lines from a
      :file:`robots.txt` file.

   .. method:: can_fetch(useragent, url)

      Returns ``True`` if the *useragent* is allowed to fetch the *url*
      according to the rules contained in the parsed :file:`robots.txt`
      file.

   .. method:: mtime()

      Returns the time the ``robots.txt`` file was last fetched.  This is
      useful for long-running web spiders that need to check for new
      ``robots.txt`` files periodically.

   .. method:: modified()

      Sets the time the ``robots.txt`` file was last fetched to the current
      time.

The following example demonstrates basic use of the :class:`RobotFileParser`
class.

   >>> import urllib.robotparser
   >>> rp = urllib.robotparser.RobotFileParser()
   >>> rp.set_url("http://www.musi-cal.com/robots.txt")
   >>> rp.read()
   >>> rp.can_fetch("*", "http://www.musi-cal.com/cgi-bin/search?city=San+Francisco")
   False
   >>> rp.can_fetch("*", "http://www.musi-cal.com/")
   True
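
Because :meth:`read` both fetches and parses in one step, a crawler that has
already obtained the :file:`robots.txt` text some other way can pass its
lines straight to :meth:`parse`.  The sketch below (an illustration, not
part of the module) combines that with the :meth:`mtime` / :meth:`modified`
pattern for deciding when cached rules are stale; the sample rules, the
one-hour threshold and the URLs are arbitrary choices::

   import time
   import urllib.robotparser

   # Rules obtained as text, e.g. fetched earlier with a separate HTTP
   # client; parse() accepts the lines without touching the network.
   ROBOTS_TXT = """\
   User-agent: *
   Disallow: /cgi-bin/
   """

   rp = urllib.robotparser.RobotFileParser()
   rp.set_url("http://www.musi-cal.com/robots.txt")
   rp.parse(ROBOTS_TXT.splitlines())
   rp.modified()  # record "now" as the time the rules were obtained

   # A long-running spider can compare mtime() against the clock to
   # decide when to refetch; one hour is an arbitrary threshold here.
   if time.time() - rp.mtime() > 3600:
       rp.read()      # refetch and reparse from the URL set above
       rp.modified()  # record the new fetch time

   print(rp.can_fetch("*", "http://www.musi-cal.com/cgi-bin/search"))  # False
   print(rp.can_fetch("*", "http://www.musi-cal.com/"))                # True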