@newflexgetman wrote:
Hi,
Here is my configuration and the relevant part of the output. I cannot get the `{{title}}.crawljob` file to be written; there is an error message in the output below. Any help would be appreciated. Thanks.

```yaml
templates:
  magazine:
    regexp:
      from: title
      accept:
        - '(wanted_collection).*'

tasks:
  rlsbb_task:
    priority: 1
    verify_ssl_certificates: no
    rss: http://rlsbb.ru/category/catalogy/ebooks-magazines/feed/
    template: magazine
    rlsbb:
      parse_comments: no
      filehosters_re:
        - rlsbb\.ru.*
      link_text_re:
        - UPLOADGiG
        - NiTROFLARE
        - RAPiDGATOR
    exec:
      - echo "text={{urls}}" >> "/media/file/jdfile/folderwatch/{{title}}.crawljob"
```
I got the error shown in the output when I executed the command:

```
flexget --test execute --tasks rlsbb_task
```
```
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/flexget/plugins/internal/urlrewriting.py", line 69, in url_rewrite
    urlrewriter.instance.url_rewrite(task, entry)
  File "/usr/local/lib/python2.7/dist-packages/flexget/plugin.py", line 118, in wrapped_func
    return func(*args, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/flexget/plugins/sites/rlsbb.py", line 118, in url_rewrite
    log.debug('Original urls: %s', str(entry['urls']))
  File "/usr/local/lib/python2.7/dist-packages/flexget/utils/lazy_dict.py", line 71, in __getitem__
    item = self.store[key]
KeyError: u'urls'
```
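For context, the traceback shows the rlsbb URL rewriter reading `entry['urls']` from an entry that only has a `url` field, which raises `KeyError`. A minimal sketch of that mechanism, using a plain dict to stand in for a FlexGet entry (the field names here are assumptions for illustration, not FlexGet's actual classes):

```python
# Sketch: an RSS entry typically carries a single 'url' field.
# Code that reads the plural 'urls' key (as rlsbb.py line 118 in the
# traceback effectively does) raises KeyError when that key is absent.
entry = {"title": "Some Magazine", "url": "http://rlsbb.ru/some-post/"}

try:
    urls = entry["urls"]  # the failing lookup from the traceback
except KeyError as exc:
    print("KeyError:", exc)  # prints: KeyError: 'urls'
```

The same lookup failure would also affect the `{{urls}}` template in the `exec` line above, since it references the same missing field.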
Posts: 4
Participants: 2