How to restore websites from the Web Archive - Part 2

Published: 2019-12-04

In the previous article we examined how the restore process works; in this article we will cover a very important stage of restoring a site from the Wayback Machine: preparing the domain for restoration. This step ensures that you recover the maximum amount of your site’s content.

All the work described at this stage concerns robots.txt rules. When the Wayback Machine indexes a website, it ignores the rules recorded in robots.txt, but it saves the file itself. When you browse an archived version through the web interface, it shows you the files, designs, and pictures that were saved regardless of robots.txt. But when you restore a website through the web archive API, these files will not be retrieved, because the API complies with the robots.txt rules that were saved during indexing. This is not a problem, though: the web archive only takes into account the latest version of robots.txt, and you can create that version yourself.

How to prepare a website for downloading from the web archive?

  1. Buy the domain where the website was hosted.
  2. Configure DNS records for the purchased domain and point it to your web hosting.
  3. Create a robots.txt file with the following text:

User-agent: *
Disallow:


Then place it in the root of the website you want to restore.

  4. Save the robots.txt file with open indexing to the web archive database. This is done as follows:

There is a Save Page Now form on the main page of the web archive:

Enter the full URL of the robots.txt file on the new domain into this form. The protocol of the new domain (http or https) does not matter, because the robots.txt will be the same either way, and indexing of the new robots.txt will apply to all previously saved data regardless of which protocol was used in the past. So, save robots.txt by pressing the SAVE PAGE button.
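By the way, this step can also be scripted: the same save can be triggered with a plain request to the public https://web.archive.org/save/ endpoint. A minimal PHP sketch (example.com here is a placeholder for your new domain):

<?php
// Trigger the Wayback Machine's Save Page Now for the new robots.txt.
// "example.com" is a placeholder - replace it with your restored domain.
$target  = 'http://example.com/robots.txt';
$saveUrl = 'https://web.archive.org/save/' . $target;

// A plain GET request is enough for an anonymous save.
$context = stream_context_create(['http' => ['timeout' => 60]]);
$result  = file_get_contents($saveUrl, false, $context);

echo $result === false ? "Save request failed\n" : "Save request sent\n";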

Here we see the new version of robots.txt on the new domain and hosting, as well as the current timestamp. Please note that this does not happen immediately after you press the SAVE PAGE button: it can take about 24 hours before the new version of robots.txt appears in the web archive. If the file does not open, returns an error, or shows just a white screen, try incognito mode or another browser. If you still see a white screen or errors, the file has not been saved.

Let’s check indexability. Go to the general calendar, select the Summary tool, and then go to the Explore tool.

This tool works through the web archive API, so here we can check whether indexing is now open. If the data table for a file loads, then search robots will index it; otherwise the website is closed from indexing (as indicated in the robots.txt file). If folders are partially closed, their URLs are closed as well.
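The same check can be done without the Explore tool: the Wayback Machine’s public CDX API returns the list of saved snapshots directly. A minimal PHP sketch (example.com is a placeholder for your domain):

<?php
// List saved snapshots of robots.txt via the Wayback Machine CDX API.
// "example.com" is a placeholder - replace it with your domain.
$api = 'https://web.archive.org/cdx/search/cdx'
     . '?url=' . urlencode('example.com/robots.txt')
     . '&output=json&fl=timestamp,statuscode,original';

$rows = json_decode(file_get_contents($api), true);
if (!$rows) {
    exit("Request failed or no snapshots found\n");
}
array_shift($rows); // the first row holds the field names
foreach ($rows as [$timestamp, $status, $original]) {
    echo "$timestamp  $status  $original\n";
}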

When the table loads, we also need to check whether there are any other robots.txt files on this domain. As we see in the example, there are other robots.txt files located in different directories, and within those directories their rules take precedence over the root robots.txt file.

So, if you see several such files, it is better to account for all of them, to be sure that all the materials you want to save are open.
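To find those extra robots.txt files without clicking through the calendar, you can ask the same CDX API for every robots.txt the archive has seen anywhere on the domain. A sketch (the filter regex is our assumption based on the public CDX API documentation; example.com is a placeholder):

<?php
// Find every robots.txt the archive saved anywhere on the domain.
// matchType=domain also covers subdomains; collapse=urlkey removes duplicates.
$api = 'https://web.archive.org/cdx/search/cdx'
     . '?url=example.com&matchType=domain'
     . '&filter=' . urlencode('original:.*robots\.txt')
     . '&collapse=urlkey&fl=original&output=text';

echo file_get_contents($api); // one archived robots.txt URL per line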

So that you do not have to hunt for other robots.txt files on the website, we made 3 files (download link); upload them to the new domain’s hosting and everything will be handled correctly.

These are robots.txt, .htaccess and index.php. Here are the contents of these files:
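The exact listings from the download are not reproduced here, but based on the descriptions below they amount to roughly the following (a reconstruction, not the verbatim files; the Retry-After value of 259200 seconds corresponds to the 3 days mentioned below).

robots.txt:

User-agent: *
Disallow:

.htaccess:

RewriteEngine On
# Serve the root robots.txt for any URL that ends in robots.txt
RewriteCond %{REQUEST_URI} !^/robots\.txt$
RewriteRule robots\.txt$ /robots.txt [L]
# Send every other request for a non-existent file or folder to index.php
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule . /index.php [L]

index.php:

<?php
// Answer 503 so search engines treat the site as temporarily unavailable
// and come back later instead of dropping pages from the index.
header('HTTP/1.1 503 Service Temporarily Unavailable');
header('Retry-After: 259200'); // 3 days, in seconds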

What is the purpose of these files?

  • Any URL ending in robots.txt will now serve the contents of this file from the root directory, no matter which directory the URL points to. This saves us from creating extra folders and uploading extra copies of robots.txt.
  • For all other requests besides .../robots.txt, and for all non-existent files and folders, index.php will be loaded. It simply returns a 503 code when opened. This is done so that, when you bind the domain to the hosting, search engine spiders do not receive a 404 response when they visit your website; if that happened, important content would drop out of the index. A search engine that receives a 503 response code treats the website as undergoing technical work and comes back later to index the updated content. In the index.php file you can see a Retry-After line, which specifies the time in seconds after which the search robot should return to index your website. So if a search engine robot visits your website before you have uploaded any content, it will come back later to check the site again. A repeat visit time of 3 days is already set in this file.
  • Since the web archive is very slow, after adding a new robots.txt you need to wait at least 24 hours until the changes take effect. Only after that period can the website be checked with the Summary tool, and only then should you start restoring your website. In other words, after uploading these files you can safely restore the website on the purchased domain and be sure that search engine spiders will not cache anything wrong, for example an open root directory listing or 404 errors.

Possible problems when restoring robots.txt

Example 1.

On the main page, we enter a link to the file and see a calendar for this particular URL with the saved versions of the file.

Then we open the latest version on the calendar, dated May 31. We can see that it is a robots.txt file from the WordPress CMS.

WordPress is a special case, because this CMS often closes very important and necessary folders and files in its robots.txt file.
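A WordPress robots.txt of this kind might look something like the following (a hypothetical illustration, since the archived screenshot is not reproduced here; the exact paths vary from site to site):

User-agent: *
Disallow: /wp-admin/
Disallow: /wp-includes/
Disallow: /wp-content/themes/
Disallow: /wp-content/uploads/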

In this example we see that the themes folder is closed, and even the folder with media files may be closed. As a result, when you browse the archived website through the web archive everything looks fine, but the restored website will come out with broken designs, styles, and missing files. If you perform the preparation stage correctly and open the website for indexing with the newly uploaded files, this problem can be avoided.

Example 2.

On the main page, we enter a link to the file.

Let’s open the latest version on the calendar, dated May 31.

Here we can see that the website is completely closed from indexing!
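A robots.txt that closes a site completely usually consists of just two lines:

User-agent: *
Disallow: /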

This is not the site owner’s fault: when the domain provider’s parking page was live, the provider uploaded a robots.txt file with this content. That is bad both for website restoring and for search engines, because when they visit and see such a website, they begin to remove from the index everything that had been saved. The new robots.txt file that you upload to the new domain’s hosting, opening the site’s contents for indexing, solves this problem.

The instructions we provide are suitable for all websites, including those that had domain provider parking pages. They will allow you not only to restore the maximum possible amount of the old website’s content from the web archive, but also to restore its position in search.

In the next part we will discuss how to choose the “before” date for your domain.


How to restore websites from the Web Archive - Part 1

How to restore websites from the Web Archive - Part 3
