@Robur wrote:
I have tried (see below) using a first task to populate a custom list (list_add) and then a second task to feed it to the html plugin (entry_list), but it does not work.
The thing is: I need a double HTML parse. First, to scrape and filter a single index front page containing links to secondary pages with alternative download URLs. Second, to filter and grab the URLs/links within those secondary pages that point to a particular domain, and send them to pyLoad.
It's easy to make each step work on its own. But how do I automate it, so that all URLs grabbed in the first step are passed on to be processed in the second? (Increment does not seem to apply, as the URLs are based on different name combinations.)
Thanks in advance for any ideas on the proper syntax/plugin combination to achieve this.
get_magazines_3:
  priority: 24
  html: http://website.com/category/indexpage
  regexp:
    accept:
      - Whatever
    rest: reject
  list_add:
    - entry_list: pds_to_download

get_magazines_4:
  priority: 25
  entry_list: pds_to_download
  accept_all: yes
  html: url
  regexp:
    accept:
      - hosterx.net
    rest: reject
  exec:
    allow_background: no
    auto_escape: no
    on_output:
      for_accepted: pyLoadCli add "{{title}}" {{url}}
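Outside of FlexGet, the two-stage pattern the tasks above are aiming for can be sketched in plain Python: stage one collects the secondary-page links from the index page, stage two scans each secondary page for links to the hoster domain. This is only an illustration of the data flow, not the FlexGet mechanics; the sample HTML, the `/mag/` path, and the `hosterx.net` domain are placeholders, and the page fetch is stubbed out with a canned string.

```python
import re
from html.parser import HTMLParser


class LinkExtractor(HTMLParser):
    """Collect href values from <a> tags whose URL matches a regex."""

    def __init__(self, pattern):
        super().__init__()
        self.pattern = re.compile(pattern)
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href", "")
            if self.pattern.search(href):
                self.links.append(href)


def extract_links(html, pattern):
    parser = LinkExtractor(pattern)
    parser.feed(html)
    return parser.links


# Stage 1: parse the index front page for secondary-page links.
index_html = '<a href="/mag/whatever-issue-1">issue</a><a href="/other">misc</a>'
secondary_urls = extract_links(index_html, r"/mag/")

# Stage 2: parse each secondary page for links to the hoster domain.
hoster_links = []
for url in secondary_urls:
    # A real run would fetch `url` here (e.g. with urllib.request);
    # a canned page stands in for the download in this sketch.
    secondary_html = '<a href="http://hosterx.net/file/abc">download</a>'
    hoster_links.extend(extract_links(secondary_html, r"hosterx\.net"))
```

The key point is that stage two iterates over whatever stage one collected, which is exactly the hand-off the entry_list plugin is meant to provide between the two tasks.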
EDIT:
I understand the below SHOULD work instead, but a bug seems to exist with the pyload plugin and Python 2.7, hence the need to go for pyLoadCli and try to do all the parsing work with FlexGet.
get_magazines_3:
  priority: 24
  html: http://somewebsite.com/keyword/
  regexp:
    accept:
      - One Magazine
    rest: reject
  pyload:
    api: http://192.168.1.4:8000/api
    username: *********
    password: **********
    # package: "{{url}}"
    queue: yes
    parse_url: yes
    hoster:
      - UploadedTo
    multiple_hoster: no
    enabled: yes
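If the pyload plugin is broken, another fallback besides pyLoadCli is talking to the pyLoad API endpoint in the config above directly. Below is a minimal sketch of building an addPackage request; it assumes the 0.4.x-era JSON API, where each parameter is sent as a form field whose value is JSON-encoded, and where you log in first via the `/login` method. Endpoint names and parameter encoding are assumptions here, so verify them against your pyLoad version.

```python
import json
from urllib.parse import urlencode


def build_add_package_request(api_base, package_name, links):
    """Build the URL and form body for pyLoad's addPackage call.

    Assumes the 0.4.x-era JSON API, where each parameter is sent
    as a form field whose value is JSON-encoded.
    """
    url = api_base.rstrip("/") + "/addPackage"
    body = urlencode({
        "name": json.dumps(package_name),
        "links": json.dumps(links),
    })
    return url, body


url, body = build_add_package_request(
    "http://192.168.1.4:8000/api",
    "One Magazine",
    ["http://hosterx.net/file/abc"],
)
# `body` would be POSTed with urllib.request after authenticating
# against the API's login method with the same credentials as above.
```

This only constructs the request; wiring in the login step and the actual POST is left out to keep the sketch self-contained.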