API documentation of recipes

The API for writing recipes is defined by BasicNewsRecipe

class calibre.web.feeds.news.BasicNewsRecipe(options, log, progress_reporter)[source]

Base class that contains the logic needed in all recipes. By overriding progressively more of the functionality in this class, you can make progressively more customized/powerful recipes. For a tutorial introduction to creating recipes, see the tutorial on adding your favorite news website.

abort_article(msg=None)[source]

Call this method inside any of the preprocess methods to abort the download of the current article. Useful to skip articles that contain inappropriate content, such as pure video articles.
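
For example, a minimal sketch (the video-player selector is a hypothetical illustration, not part of the API):

def preprocess_html(self, soup):
    # Abort articles that are nothing but an embedded video player
    if soup.find('div', attrs={'class': 'video-player'}) is not None:
        self.abort_article('Skipping video-only article')
    return soup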

abort_recipe_processing(msg)[source]

Causes the recipe download system to abort the download of this recipe, displaying a simple feedback message to the user.

add_toc_thumbnail(article, src)[source]

Call this from populate_article_metadata with the src attribute of an <img> tag from the article that is appropriate for use as the thumbnail representing the article in the Table of Contents. Whether the thumbnail is actually used is device dependent (currently only used by the Kindle). Note that the referenced image must be one that was successfully downloaded, otherwise it will be ignored.
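
A minimal sketch of a caller (using the first image of the first page is just an illustrative policy):

def populate_article_metadata(self, article, soup, first):
    if first:
        # The image must be one the fetcher actually downloaded
        img = soup.find('img', src=True)
        if img is not None:
            self.add_toc_thumbnail(article, img['src'])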

classmethod adeify_images(soup)[source]

If your recipe, when converted to EPUB, has problems with images when viewed in Adobe Digital Editions, call this method from within postprocess_html.

canonicalize_internal_url(url, is_link=True)[source]

Return a set of canonical representations of url. The default implementation uses just the server hostname and path of the URL, ignoring any query parameters, fragments, etc. The canonical representations must be unique across all URLs for this news source. If they are not, then internal links may be resolved incorrectly.

Parameters:

is_link – True if the URL is coming from an internal link in an HTML file. False if the URL is the URL used to download an article.
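
A hedged sketch of an override for a site that serves identical articles from two hostnames (the hostnames are illustrative):

from urllib.parse import urlparse

def canonicalize_internal_url(self, url, is_link=True):
    # Collapse the mobile host into the desktop one so both forms of a
    # link canonicalize identically; drop query parameters and fragments
    p = urlparse(url)
    host = p.netloc.replace('m.example.com', 'www.example.com')
    return frozenset({(host, p.path)})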

cleanup()[source]

Called after all articles have been downloaded. Use it to do any cleanup like logging out of subscription sites, etc.

clone_browser(br)[source]

Clone the browser br. Cloned browsers are used for multi-threaded downloads, since mechanize is not thread safe. The default cloning routines should capture most browser customization, but if you do something exotic in your recipe, you should override this method in your recipe and clone manually.

Cloned browser instances use the same, thread-safe CookieJar by default, unless you have customized cookie handling.

default_cover(cover_file)[source]

Create a generic cover for recipes that don't have a cover

download()[source]

Download and pre-process all articles from the feeds in this recipe. This method should be called only once on a particular Recipe instance. Calling it more than once will lead to undefined behavior.

Returns:

The path to index.html

extract_readable_article(html, url)[source]

Extract the main article content from html, clean it up and return it as an (article_html, extracted_title) tuple. Based on the original readability algorithm by Arc90.

get_article_url(article)[source]

Override in a subclass to customize extraction of the URL that points to the content for each article. Return the article URL. It is called with article, an object representing a parsed article from a feed. See feedparser (https://pythonhosted.org/feedparser/). By default it looks for the original link (for feeds syndicated via a service like FeedBurner or Pheedo) and if found, returns that; otherwise it returns article.link (https://pythonhosted.org/feedparser/reference-entry-link.html).
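
For example, a sketch for a feed whose guid entries carry the real article links (an assumption about this particular feed, not a general rule):

def get_article_url(self, article):
    # Prefer the guid when present, else fall back to the default logic
    return article.get('guid') or BasicNewsRecipe.get_article_url(self, article)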

get_browser(*args, **kwargs)[source]

Return a browser instance used to fetch documents from the web. By default it returns a mechanize browser instance that supports cookies, ignores robots.txt, handles refreshes and has a Mozilla Firefox user agent.

If your recipe requires that you login first, override this method in your subclass. For example, the following code is used in the New York Times recipe to login for full access:

def get_browser(self):
    br = BasicNewsRecipe.get_browser(self)
    if self.username is not None and self.password is not None:
        br.open('https://www.nytimes.com/auth/login')
        br.select_form(name='login')
        br['USERID']   = self.username
        br['PASSWORD'] = self.password
        br.submit()
    return br
get_cover_url()[source]

Return a URL to the cover image for this issue or None. By default it returns the value of the member self.cover_url which is normally None. If you want your recipe to download a cover for the e-book, override this method in your subclass, or set the member variable self.cover_url before this method is called.

get_extra_css()[source]

By default returns self.extra_css. Override if you want to programmatically generate the extra_css.

get_feeds()[source]

Return a list of RSS feeds to fetch for this profile. Each element of the list must be a 2-element tuple of the form (title, url). If title is None or an empty string, the title from the feed is used. This method is useful if your recipe needs to do some processing to figure out the list of feeds to download. If so, override it in your subclass.
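
A sketch of a programmatic override (the URLs and the subscriber check are illustrative):

def get_feeds(self):
    feeds = [('Top stories', 'https://example.com/rss/top.xml')]
    if self.username and self.password:
        # Hypothetical extra feed that only subscribers can fetch
        feeds.append(('Premium', 'https://example.com/rss/premium.xml'))
    return feeds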

get_masthead_title()[source]

Override in subclass to use something other than the recipe title

get_masthead_url()[source]

Return a URL to the masthead image for this issue or None. By default it returns the value of the member self.masthead_url which is normally None. If you want your recipe to download a masthead for the e-book, override this method in your subclass, or set the member variable self.masthead_url before this method is called. Masthead images are used in Kindle MOBI files.

get_obfuscated_article(url)[source]

If you set articles_are_obfuscated this method is called with every article URL. It should return the path to a file on the filesystem that contains the article HTML. That file is processed by the recursive HTML fetching engine, so it can contain links to pages/images on the web. Alternately, you can return a dictionary of the form: {'data': <HTML data>, 'url': <the resolved URL of the article>}. This avoids needing to create temporary files. The url key in the dictionary is useful if the effective URL of the article is different from the URL passed into this method, for example, because of redirects. It can be omitted if the URL is unchanged.

This method is typically useful for sites that try to make it difficult to access article content automatically.
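
A minimal sketch using the dictionary return form; any site-specific de-obfuscation would go between the fetch and the return:

def get_obfuscated_article(self, url):
    br = self.get_browser()
    res = br.open(url)
    # Return the HTML directly, avoiding a temporary file; geturl()
    # gives the effective URL after any redirects
    return {'data': res.read(), 'url': res.geturl()}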

get_url_specific_delay(url)[source]

Return the delay in seconds before downloading this URL. If you want to programmatically determine the delay for the specified URL, override this method in your subclass, returning self.delay by default for URLs you do not want to affect.

Returns:

A floating point number, the delay in seconds.
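
For example, a sketch that slows down requests to one hypothetical media host and leaves everything else at the default:

def get_url_specific_delay(self, url):
    if 'media.example.com' in url:
        return 2.0  # illustrative per-host delay
    return self.delay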

classmethod image_url_processor(baseurl, url)[source]

Perform some processing on image URLs (perhaps removing size restrictions for dynamically generated images, etc.) and return the processed URL. Return None or an empty string to skip fetching the image.
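
A sketch of a typical override (the thumbnail suffix is hypothetical):

@classmethod
def image_url_processor(cls, baseurl, url):
    # Strip a size-limiting suffix so the full-resolution image is fetched
    return url.replace('-300x200', '')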

index_to_soup(url_or_raw, raw=False, as_tree=False, save_raw=None)[source]

Convenience method that takes a URL to the index page and returns a BeautifulSoup of it.

url_or_raw: Either a URL or the downloaded index page as a string

is_link_wanted(url, tag)[source]

Return True if the link should be followed or False otherwise. By default, raises NotImplementedError which causes the downloader to ignore it.

Parameters:
  • url – The URL to be followed

  • tag – The tag from which the URL was derived

parse_feeds()[source]

Create a list of articles from the list of feeds returned by BasicNewsRecipe.get_feeds(). Return a list of Feed objects.

parse_index()[source]

This method should be implemented in recipes that parse a website instead of feeds to generate a list of articles. Typical uses are for news sources that have a “Print Edition” webpage that lists all the articles in the current print edition. If this function is implemented, it will be used in preference to BasicNewsRecipe.parse_feeds().

It must return a list. Each element of the list must be a 2-element tuple of the form ('feed title', list of articles).

Each list of articles must contain dictionaries of the form:

{
'title'       : article title,
'url'         : URL of print version,
'date'        : The publication date of the article as a string,
'description' : A summary of the article,
'content'     : The full article (can be an empty string). Obsolete;
                do not use. Instead save the content to a temporary
                file and pass a file:///path/to/temp/file.html as
                the URL.
}

For an example, see the recipe for downloading The Atlantic. In addition, you can add 'author' for the author of the article.

If you want to abort processing for some reason and have calibre show the user a simple message instead of an error, call abort_recipe_processing().
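
A skeletal sketch of such an implementation (the URL, the selector and the single-section layout are all illustrative assumptions):

def parse_index(self):
    soup = self.index_to_soup('https://example.com/print-edition')
    articles = []
    for a in soup.findAll('a', attrs={'class': 'headline'}):
        articles.append({
            'title': self.tag_to_string(a),
            'url': a['href'],  # assumed to be an absolute URL
            'date': '',
            'description': '',
        })
    return [('Print edition', articles)]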

populate_article_metadata(article, soup, first)[source]

Called when each HTML page belonging to article is downloaded. Intended to be used to get article metadata like author/summary/etc. from the parsed HTML (soup).

Parameters:
  • article – An object of class calibre.web.feeds.Article. If you change the summary, remember to also change the text_summary

  • soup – Parsed HTML belonging to this article

  • first – True iff the parsed HTML is the first page of the article.
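
A sketch of a typical override (the standfirst selector is a hypothetical example):

def populate_article_metadata(self, article, soup, first):
    if not first:
        return
    # Use the article's standfirst paragraph as its summary
    p = soup.find('p', attrs={'class': 'standfirst'})
    if p is not None:
        article.summary = article.text_summary = self.tag_to_string(p)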

postprocess_book(oeb, opts, log)[source]

Run any needed post processing on the parsed downloaded e-book.

Parameters:
  • oeb – An OEBBook object

  • opts – Conversion options

postprocess_html(soup, first_fetch)[source]

This method is called with the source of each downloaded HTML file, after it is parsed for links and images. It can be used to do arbitrarily powerful post-processing on the HTML. It should return soup after processing it.

Parameters:
  • soup – A BeautifulSoup instance containing the downloaded HTML.

  • first_fetch – True if this is the first page of an article.

preprocess_html(soup)[source]

This method is called with the source of each downloaded HTML file, before it is parsed for links and images. It is called after the cleanup as specified by remove_tags etc. It can be used to do arbitrarily powerful pre-processing on the HTML. It should return soup after processing it.

soup: A BeautifulSoup instance containing the downloaded HTML.
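
For example, a sketch that promotes hypothetical lazy-loaded images so the fetcher can find them:

def preprocess_html(self, soup):
    for img in soup.findAll('img', attrs={'data-src': True}):
        # Copy the real source into src so the image gets downloaded
        img['src'] = img['data-src']
    return soup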

preprocess_image(img_data, image_url)[source]

Perform some processing on downloaded image data. This is called on the raw data before any resizing is done. Must return the processed raw data. Return None to skip the image.
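
A minimal sketch (the size threshold is an arbitrary illustration) that discards tiny tracking pixels:

def preprocess_image(self, img_data, image_url):
    if len(img_data) < 200:  # almost certainly a 1x1 tracker
        return None
    return img_data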

preprocess_raw_html(raw_html, url)[source]

This method is called with the source of each downloaded HTML file, before it is parsed into an object tree. raw_html is a unicode string representing the raw HTML downloaded from the web. url is the URL from which the HTML was downloaded.

Note that this method acts before preprocess_regexps.

This method must return the processed raw_html as a unicode object.
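
For example, a sketch that removes a hypothetical paywall banner from the raw markup before it is parsed:

def preprocess_raw_html(self, raw_html, url):
    return raw_html.replace('<div id="paywall-banner"></div>', '')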

classmethod print_version(url)[source]

Take a url pointing to the webpage with article content and return the URL pointing to the print version of the article. By default does nothing. For example:

def print_version(self, url):
    return url + '?&pagewanted=print'
publication_date()[source]

Use this method to set the date when this issue was published. Defaults to the moment of download. Must return a datetime.datetime object.
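
A sketch for a hypothetical weekly that is dated on the most recent Saturday:

from datetime import datetime, timedelta

def publication_date(self):
    now = datetime.now()
    # Saturday is weekday 5; step back to the most recent one
    return now - timedelta(days=(now.weekday() - 5) % 7)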

skip_ad_pages(soup)[source]

This method is called with the source of each downloaded HTML file, before any of the cleanup attributes like remove_tags, keep_only_tags are applied. Note that preprocess_regexps will have already been applied. It is meant to allow the recipe to skip ad pages. If the soup represents an ad page, return the HTML of the real page. Otherwise return None.

soup: A BeautifulSoup instance containing the downloaded HTML.
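
A sketch of the usual pattern (the interstitial's continue link is hypothetical):

def skip_ad_pages(self, soup):
    a = soup.find('a', attrs={'id': 'continue-to-article'})
    if a is None:
        return None  # not an ad page, keep it
    # Fetch and return the HTML of the real page; raw=True returns the
    # raw HTML instead of a parsed soup
    return self.index_to_soup(a['href'], raw=True)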

sort_index_by(index, weights)[source]

Convenience method to sort the titles in index according to weights. index is sorted in place. Returns index.

index: A list of titles.

weights: A dictionary that maps titles to weights. If any titles in index are not in weights, they are assumed to have a weight of 0.
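
A small usage sketch, assuming lower weights sort first (titles and weights are illustrative):

sections = ['Sports', 'Front Page', 'Opinion']
self.sort_index_by(sections, {'Front Page': 0, 'Opinion': 1, 'Sports': 2})
# sections is now ['Front Page', 'Opinion', 'Sports']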

classmethod tag_to_string(tag, use_alt=True, normalize_whitespace=True)[source]

Convenience method to take a BeautifulSoup Tag and extract the text from it recursively, including any CDATA sections and alt tag attributes. Return a possibly empty Unicode string.

use_alt: If True try to use the alt attribute for tags that don’t have any textual content

tag: BeautifulSoup Tag
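
A one-line usage sketch (the h1 lookup is illustrative):

headline = self.tag_to_string(soup.find('h1'))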

articles_are_obfuscated = False

Set to True and implement get_obfuscated_article() to handle websites that try to make it difficult to scrape content.

auto_cleanup = False

Automatically extract all the text from downloaded article pages. Uses the algorithms from the readability project. Setting this to True, means that you do not have to worry about cleaning up the downloaded HTML manually (though manual cleanup will always be superior).

auto_cleanup_keep = None

Specify elements that the auto cleanup algorithm should never remove. The syntax is a XPath expression. For example:

auto_cleanup_keep = '//div[@id="article-image"]'
    will keep all divs with id="article-image"
auto_cleanup_keep = '//*[@class="important"]'
    will keep all elements with class="important"
auto_cleanup_keep = '//div[@id="article-image"]|//span[@class="important"]'
    will keep all divs with id="article-image" and spans with class="important"
center_navbar = True

If True the navigation bar is center aligned, otherwise it is left aligned

compress_news_images = False

Set this to False to ignore all scaling and compression parameters and pass images through unmodified. If True and the other compression parameters are left at their default values, JPEG images will be scaled to fit in the screen dimensions set by the output profile and compressed to size at most (w * h)/16 where w x h are the scaled image dimensions.

compress_news_images_auto_size = 16

The factor used when auto compressing JPEG images. If set to None, auto compression is disabled. Otherwise, the images will be reduced in size to (w * h)/compress_news_images_auto_size bytes if possible by reducing the quality level, where w x h are the image dimensions in pixels. The minimum JPEG quality will be 5/100 so it is possible this constraint will not be met. This parameter can be overridden by the parameter compress_news_images_max_size which provides a fixed maximum size for images. Note that if you enable scale_news_images_to_device then the image will first be scaled and then its quality lowered until its size is less than (w * h)/factor where w and h are now the scaled image dimensions. In other words, this compression happens after scaling.

compress_news_images_max_size = None

Set JPEG quality so images do not exceed the size given (in KBytes). If set, this parameter overrides auto compression via compress_news_images_auto_size. The minimum JPEG quality will be 5/100 so it is possible this constraint will not be met.

conversion_options = {}

Recipe specific options to control the conversion of the downloaded content into an e-book. These will override any user or plugin specified values, so only use if absolutely necessary. For example:

conversion_options = {
  'base_font_size'   : 16,
  'linearize_tables' : True,
}
cover_margins = (0, 0, '#ffffff')

By default, the cover image returned by get_cover_url() will be used as the cover for the periodical. Overriding this in your recipe instructs calibre to render the downloaded cover into a frame whose width and height are expressed as a percentage of the downloaded cover. cover_margins = (10, 15, '#ffffff') pads the cover with a white margin 10px on the left and right, 15px on the top and bottom. Standard color names may also be used. Note that, for some reason, white does not always work in Windows; use #ffffff instead.

delay = 0

The default delay between consecutive downloads in seconds. The argument may be a floating point number to indicate a more precise time. See get_url_specific_delay() to implement per URL delays.

description = ''

A couple of lines that describe the content this recipe downloads. This will be used primarily in a GUI that presents a list of recipes.

encoding = None

Specify an override encoding for sites that have an incorrect charset specification. The most common being specifying latin1 and using cp1252. If None, try to detect the encoding. If it is a callable, the callable is called with two arguments: The recipe object and the source to be decoded. It must return the decoded source.
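
A sketch of the callable form, assuming the two-argument convention described above (the cp1252 choice is illustrative):

encoding = lambda recipe, raw: raw.decode('cp1252', 'replace')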

extra_css = None

Specify any extra CSS that should be added to downloaded HTML files. It will be inserted into <style> tags, just before the closing </head> tag thereby overriding all CSS except that which is declared using the style attribute on individual HTML tags. Note that if you want to programmatically generate the extra_css override the get_extra_css() method instead. For example:

extra_css = '.heading { font: serif x-large }'
feeds = None

List of feeds to download. Can be either [url1, url2, ...] or [('title1', url1), ('title2', url2),...]
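
For example (the URLs are illustrative):

feeds = [
    ('World', 'https://example.com/rss/world.xml'),
    ('Sports', 'https://example.com/rss/sports.xml'),
]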

filter_regexps = []

List of regular expressions that determines which links to ignore. If empty it is ignored. Used only if is_link_wanted is not implemented. For example:

filter_regexps = [r'ads\.doubleclick\.net']

will remove all URLs that have ads.doubleclick.net in them.

Only one of BasicNewsRecipe.match_regexps or BasicNewsRecipe.filter_regexps should be defined.

handle_gzip = True

Set to False if you do not want to use gzipped transfers. Note that some old servers flake out with gzip

ignore_duplicate_articles = None

Ignore duplicates of articles that are present in more than one section. A duplicate article is an article that has the same title and/or URL. To ignore articles with the same title, set this to:

ignore_duplicate_articles = {'title'}

To use URLs instead, set it to:

ignore_duplicate_articles = {'url'}

To match on title or URL, set it to:

ignore_duplicate_articles = {'title', 'url'}
keep_only_tags = []

Keep only the specified tags and their children. For the format for specifying a tag see BasicNewsRecipe.remove_tags. If this list is not empty, then the <body> tag will be emptied and re-filled with the tags that match the entries in this list. For example:

keep_only_tags = [dict(id=['content', 'heading'])]

will keep only tags that have an id attribute of "content" or "heading".

language = 'und'

The language that the news is in. Must be an ISO-639 code either two or three characters long

masthead_url = None

By default, calibre will use a default image for the masthead (Kindle only). Override this in your recipe to provide a URL to use as a masthead.

match_regexps = []

List of regular expressions that determines which links to follow. If empty, it is ignored. Used only if is_link_wanted is not implemented. For example:

match_regexps = [r'page=[0-9]+']

will match all URLs that have page=some number in them.

Only one of BasicNewsRecipe.match_regexps or BasicNewsRecipe.filter_regexps should be defined.

max_articles_per_feed = 100

Maximum number of articles to download from each feed. This is primarily useful for feeds that don’t have article dates. For most feeds, you should use BasicNewsRecipe.oldest_article

needs_subscription = False

If True the GUI will ask the user for a username and password to use while downloading. If set to "optional" the use of a username and password becomes optional

no_stylesheets = False

Convenient flag to disable loading of stylesheets for websites that have overly complex stylesheets unsuitable for conversion to e-book formats. If True stylesheets are not downloaded and processed

oldest_article = 7.0

Oldest article to download from this news source. In days.

preprocess_regexps = []

List of regexp substitution rules to run on the downloaded HTML. Each element of the list should be a two element tuple. The first element of the tuple should be a compiled regular expression and the second a callable that takes a single match object and returns a string to replace the match. For example:

preprocess_regexps = [
   (re.compile(r'<!--Article ends here-->.*</body>', re.DOTALL|re.IGNORECASE),
    lambda match: '</body>'),
]

will remove everything from <!--Article ends here--> to </body>.

publication_type = 'unknown'

Publication type. Set to newspaper, magazine or blog. If set to None, no publication type metadata will be written to the opf file.

recipe_disabled = None

Set to a non-empty string to disable this recipe. The string will be used as the disabled message.

recursions = 0

Number of levels of links to follow on article webpages

remove_attributes = []

List of attributes to remove from all tags. For example:

remove_attributes = ['style', 'font']
remove_empty_feeds = False

If True empty feeds are removed from the output. This option has no effect if parse_index is overridden in the sub class. It is meant only for recipes that return a list of feeds using feeds or get_feeds(). It is also used if you use the ignore_duplicate_articles option.

remove_javascript = True

Convenient flag to strip all JavaScript tags from the downloaded HTML

remove_tags = []

List of tags to be removed. Specified tags are removed from downloaded HTML. A tag is specified as a dictionary of the form:

{
 name      : 'tag name',   # e.g. 'div'
 attrs     : a dictionary, # e.g. {'class': 'advertisement'}
}

All keys are optional. For a full explanation of the search criteria, see Beautiful Soup. A common example:

remove_tags = [dict(name='div', class_='advert')]

This will remove all <div class="advert"> tags and all their children from the downloaded HTML.

remove_tags_after = None

Remove all tags that occur after the specified tag. For the format for specifying a tag see BasicNewsRecipe.remove_tags. For example:

remove_tags_after = [dict(id='content')]

will remove all tags after the first element with id="content".

remove_tags_before = None

Remove all tags that occur before the specified tag. For the format for specifying a tag see BasicNewsRecipe.remove_tags. For example:

remove_tags_before = dict(id='content')

will remove all tags before the first element with id="content".

requires_version = (0, 6, 0)

Minimum calibre version needed to use this recipe

resolve_internal_links = False

If set to True then links in downloaded articles that point to other downloaded articles are changed to point to the downloaded copy of the article rather than its original web URL. If you set this to True, you might also need to implement canonicalize_internal_url() to work with the URL scheme of your particular website.

reverse_article_order = False

Reverse the order of articles in each feed

scale_news_images = None

Maximum dimensions (w,h) to scale images to. If scale_news_images_to_device is True this is set to the device screen dimensions set by the output profile unless there is no profile set, in which case it is left at whatever value it has been assigned (default None).

scale_news_images_to_device = True

Rescale images to fit in the device screen dimensions set by the output profile. Ignored if no output profile is set.

simultaneous_downloads = 5

Number of simultaneous downloads. Set to 1 if the server is picky. Automatically reduced to 1 if BasicNewsRecipe.delay > 0

summary_length = 500

Max number of characters in the short description

template_css = '\n            .article_date {\n                color: gray; font-family: monospace;\n            }\n\n            .article_description {\n                text-indent: 0pt;\n            }\n\n            a.article {\n                font-weight: bold; text-align:left;\n            }\n\n            a.feed {\n                font-weight: bold;\n            }\n\n            .calibre_navbar {\n                font-family:monospace;\n            }\n    '

The CSS that is used to style the templates, i.e., the navigation bars and the Tables of Contents. Rather than overriding this variable, you should use extra_css in your recipe to customize look and feel.

timefmt = ' [%a, %d %b %Y]'

The format string for the date shown on the first page. By default: Day_Name, Day_Number Month_Name Year

timeout = 120.0

Timeout for fetching files from server in seconds

title = 'Unknown News Source'

The title to use for the e-book

use_embedded_content = None

Normally we try to guess if a feed has full articles embedded in it based on the length of the embedded content. If None, then the default guessing is used. If True then we always assume the feed has embedded content and if False we always assume the feed does not have embedded content.