
how do I save a subsite for offline viewing?



#1 argonvegell

Posted 17 August 2017 - 08:40 AM

I'm running Xubuntu 16.04 LTS, and I'm trying to find a way to save a subsite for offline viewing, not the entire site.

 

Before you suggest HTTrack: I've tried it, and it frustrated me to no end, so much so that I practically yelled at my screen.

 

Excerpt from https://www.httrack.com/

HTTrack is a free (GPL, libre/free software) and easy-to-use offline browser utility.

 

Easy to use? IT IS NOT EASY TO USE!!! It's easy enough if you're copying an entire site, but not if you just want to copy a subsite.

 

Are there any other alternatives to HTTrack for Linux?


Edited by argonvegell, 17 August 2017 - 08:42 AM.



 


#2 Al1000 (Global Moderator)

Posted 17 August 2017 - 09:51 AM

Try wget. You'll already have it installed.

https://stackoverflow.com/questions/11507198/wget-download-a-sub-directory
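The answers there boil down to a recursive download that isn't allowed to climb above the starting path; roughly something like this (a sketch with a placeholder URL, not tested here):

wget -r --no-parent http://example.com/site/subsite/

-r follows links recursively, and --no-parent stops wget from climbing above the directory you point it at.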

#3 argonvegell (Topic Starter)

Posted 17 August 2017 - 08:09 PM

I tried the wget solutions. They do download the subsite, but when I browse it offline the links don't work; I get a 'file not found' error.



#4 Al1000 (Global Moderator)

Posted 18 August 2017 - 01:01 AM

I would need to know what you downloaded, and how you are trying to open it, to be able to advise.

#5 argonvegell (Topic Starter)

Posted 18 August 2017 - 01:54 AM

I'm trying to download a game walkthrough subsite, here: http://en.uesp.net/wiki/Morrowind:Morrowind

 

I saved the HTML files here: /home/~/Documents/

I opened /home/~/Documents/Morrowind:Morrowind and it opened in Firefox.

But when I try to navigate through it, for example by clicking on Master Trainers, this is the result:
 

file:///wiki/Morrowind:Master_Trainers

File not found

Firefox can’t find the file at /wiki/Morrowind:Master_Trainers.

Check the file name for capitalisation or other typing errors.
Check to see if the file was moved, renamed or deleted.

 



#6 Al1000 (Global Moderator)

Posted 18 August 2017 - 04:55 AM

It sounds like you didn't use the -k option. From wget --help:
 
-k, --convert-links make links in downloaded HTML or CSS point to local files.
Try this:
wget -rkp --no-parent http://en.uesp.net/wiki/Morrowind:Morrowind
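(For readability, that's the same as writing the options out long-form:

wget --recursive --convert-links --page-requisites --no-parent http://en.uesp.net/wiki/Morrowind:Morrowind

--page-requisites also fetches the images and CSS each page needs so it displays properly offline.)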

Edited by Al1000, 18 August 2017 - 04:55 AM.


#7 argonvegell (Topic Starter)

Posted 18 August 2017 - 05:25 AM

Same result. At this point I'm giving up on this. Thanks for the assistance.

#8 Al1000 (Global Moderator)

Posted 18 August 2017 - 05:39 AM

If you don't let the download complete, the -k option won't work. I would be surprised if the download has completed in the 40 minutes since I posted the command.
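Since wget only does the -k link conversion once the whole retrieval has finished, one option (just a sketch; the log file name here is made up) is to leave it running unattended and send the output to a log:

wget -rkp --no-parent -o morrowind.log http://en.uesp.net/wiki/Morrowind:Morrowind &

-o writes wget's progress to the named file instead of the terminal, and the trailing & puts the job in the background.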

#9 argonvegell (Topic Starter)

Posted 18 August 2017 - 05:47 AM

Sorry for not explaining myself earlier.

The reason I had to halt the process was that the "wget -rkp --no-parent http://en.uesp.net/wiki/Morrowind:Morrowind" command was also downloading pages I didn't want, like http://en.uesp.net/wiki/Skyrim:Official_Add-Ons.
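For what it's worth, wget 1.14 and later (so the version in Xubuntu 16.04 should qualify) have --accept-regex/--reject-regex options that filter which URLs a recursive crawl will follow. Something along these lines might keep it inside the Morrowind pages, though it's untested against this wiki and could also filter out page requisites such as CSS and images:

wget -rkp --no-parent --accept-regex 'wiki/Morrowind:' http://en.uesp.net/wiki/Morrowind:Morrowind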



#10 Al1000 (Global Moderator)

Posted 18 August 2017 - 06:01 AM

I see. I thought you meant the command hadn't worked. It will indeed download all sub-directories as well.

Thanks for the explanation.

#11 argonvegell (Topic Starter)

Posted 18 August 2017 - 06:10 AM

Here's how I was able to do what I wanted:

wget -r http://en.uesp.net/wiki/Morrowind:Morrowind --spider 2>&1 | grep '2017' | grep 'wiki/Morrow' | grep saved | awk '{print $6}'

It printed all the links with Morrowind: in them, and I transferred the output into a list, so I ran this command:

wget -i list.txt

However, the downloaded HTML files weren't linked together, so I was unable to navigate between them in the web browser.

 

Is there any way to integrate the -k option into wget -i list.txt?
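One untested possibility, since -k isn't tied to -r, would be something like:

wget -x -k -p -i list.txt

where -x recreates the server's /wiki/... directory structure locally, which should let the converted links line up, and -p grabs each page's images and CSS.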



#12 Al1000 (Global Moderator)

Posted 18 August 2017 - 06:26 AM

What are you trying to do now? The --spider option says:

--spider                    don't download anything.


#13 argonvegell (Topic Starter)

Posted 18 August 2017 - 06:29 AM

I found this command when I was Googling for solutions. Its purpose is to list all the Morrowind:* links so I can create a list for wget to use, hence the 'wget -i list.txt' command.



#14 Al1000 (Global Moderator)

Posted 18 August 2017 - 06:31 AM

Can you post a link to where you found this command?

wget downloads data from the internet. I don't understand what you're trying to do here: wget -i list.txt

#15 argonvegell (Topic Starter)

Posted 18 August 2017 - 06:36 AM

Here: https://ubuntuforums.org/showthread.php?t=2368809&p=13676936#post13676936

 

I was also asking for help on Ubuntu's forums.

 

But as I said, the result was that the downloaded pages aren't linked to each other.





