@raidyne wrote:
Hi,
I would like to extract links from an RSS feed and write them into a .crawljob file for JDownloader2. I'm already doing this with the "manipulate" plugin and "url", but sometimes the RSS feed contains more than one link per item (multi-part files) that I would like to extract, and this seems to mess up "url".
How can I achieve this?
Example:
If the RSS feed contains an item with the links

http://hoster.com/file-part1.html
http://hoster.com/file-part2.html

I want both links written to the crawljob file, separated by a semicolon.
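For clarity, the resulting crawljob file should then look something like this (the packageName value is just a placeholder; the keys follow the same key=value lines the exec commands below already write):

```
text=http://hoster.com/file-part1.html;http://hoster.com/file-part2.html
autoConfirm=TRUE
packageName=Some.Show.Title
```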
The relevant part of my configuration currently looks like this:
mytask:
  rss: http://myrss.feed
  manipulate:
    - url:
        from: description
        separator: ';'
        extract: '(http[s]?:\/\/(\S+)?(?:hostera\.|hosterb\.)\S+)'
  all_series: yes
  exists_series: /path/to/TV Shows/
  exec:
    - echo "text={{url}}" >> "/path/to/folderwatch/{{title}}.crawljob"
    - echo "autoConfirm=TRUE" >> "/path/to/folderwatch/{{title}}.crawljob"
    - echo "packageName={{title}}" >> "/path/to/folderwatch/{{title}}.crawljob"
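To illustrate the behaviour I'm after (this is just a standalone Python sketch with made-up data, not FlexGet internals): instead of keeping only the first regex match from the description, every match should be collected and joined with ';'. The inner capture group is made non-capturing here so that re.findall returns whole URLs.

```python
import re

# Hypothetical item description containing multi-part links
description = (
    "http://hostera.com/file-part1.html\n"
    "http://hostera.com/file-part2.html"
)

# Same pattern as in the config, with (\S+)? made non-capturing
pattern = r'(http[s]?:\/\/(?:\S+)?(?:hostera\.|hosterb\.)\S+)'

# Find every match, not just the first, and join them with ';'
links = re.findall(pattern, description)
url = ';'.join(links)
print(url)
# http://hostera.com/file-part1.html;http://hostera.com/file-part2.html
```

That joined string is what I'd like {{url}} to contain when the exec commands write the crawljob file.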
Thanks!
Posts: 1
Participants: 1