FAQ Overview

[Migrated] Configuring the SSL VPN Client Address Range

Configuring the SSL VPN Client Address Range
The SSL VPN Client Address Range defines the IP address pool from which addresses are assigned to remote users during NetExtender sessions. The range needs to be large enough to accommodate the maximum number of concurrent NetExtender users you wish to support plus one (for example, the range for 15 users requires 16 addresses, such as 192.168.200.100 to 192.168.200.115).

NOTE: The range must fall within the same subnet as the interface to which the SRA appliance is connected, and in cases where there are other hosts on the same segment as the SRA appliance, it must not overlap or collide with any assigned addresses.
To configure the SSL VPN Client Address Range, complete the following steps:

1 - Navigate to the SSL VPN > Client Settings page.

2 - In the NetExtender Start IP field, enter the first IP address in the client address range.

3 - In the NetExtender End IP field, enter the last IP address in the client address range.

4 - In the DNS Server 1 field, enter the IP address of the primary DNS server, or click Default DNS Settings to use the default settings.

5 - (Optional) In the DNS Server 2 field, enter the IP address of the backup DNS server.

6 - (Optional) In the DNS Domain field, enter the domain name for the DNS servers.

7 - In the User Domain field, enter the domain name for the users. The value of this field must match the domain field in the NetExtender client.

8 - (Optional) In the WINS Server 1 field, enter the IP address of the primary WINS server.

9 - (Optional) In the WINS Server 2 field, enter the IP address of the backup WINS server.

10 - In the Interface pull-down menu, select the interface to be used for SSL VPN services.
* NOTE: The IP address range must be on the same subnet as the interface used for SSL VPN services.

11 - Click the Zone name at the top of the page to enable SSL VPN access on that zone with these settings. The indicator turns green for each Zone on which SSL VPN access is enabled.

http://help.sonicwall.com/help/sw/eng/7634/7/2/0/content/Configuring_SSLVPN.25.05.htm

Author: Angelo A Vitale
Last update: 2019-07-27 15:22


[Migrated] Finding the Ethernet Hardware (MAC) Addresses of the SonicWall

Description

This article describes how to find the MAC addresses of a SonicWall appliance and of its individual interfaces.

Resolution

The Ethernet hardware (MAC) addresses of the interfaces on any SonicWall appliance can be found using one of the following techniques:

  • Go to System | Status and look at the Serial Number. The MAC address of the SonicWall is the serial number with a colon inserted after every two digits (for example, 0017C5E1T74Y becomes 00:17:C5:E1:T7:4Y); see the shell one-liner after this list.
  • Generate a Technical Support Report (TSR) from System | Diagnostics | Download Report and locate the MAC addresses for all interfaces in the Ethernet section of the report.
    • Observe that the appliance's serial number represents its LAN interface MAC address. You can also search for "Config MAC" under the X0 interface: that will be the SonicWall MAC address.
      TIP: The MAC addresses of other interfaces on the appliance will be variations on the last octet of the unit's serial number.
  • The MAC address of an interface on a firewall (UTM) appliance can be found:
    • On the GUI, go to Network | Interfaces, select the interface and click the Edit icon. Move to the Advanced tab and here you will find the interface's MAC Address.
    • On the TSR, search for Config MAC of the related interface.
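
As a quick illustration (assuming a Linux or macOS shell with sed available; the serial number shown below is a placeholder, not a real unit), you can format a 12-character serial number as a MAC address like this:

  • echo 0017C5E1B2C3 | sed 's/../&:/g; s/:$//'
Output
00:17:C5:E1:B2:C3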

Resolution for SonicOS 6.5 and Later

SonicOS 6.5 was released September 2017. This release includes significant user interface changes and many new features that are different from the SonicOS 6.2 and earlier firmware. The below resolution is for customers using SonicOS 6.5 and later firmware.

The Ethernet hardware (MAC) addresses of the interfaces on any SonicWall appliance can be found using one of the following techniques:

  • Go to Monitor | System Status and look at the Serial Number. The MAC address of the SonicWall is the serial number with a colon inserted after every two digits (for example, 0017C5E1T74Y becomes 00:17:C5:E1:T7:4Y).
  • Generate a Technical Support Report (TSR) from Investigate | Tools | System Diagnostics | Download Report and locate the MAC addresses for all interfaces in the Ethernet section of the report.
    • Observe that the appliance's serial number represents its LAN interface MAC address. You can also search for "Config MAC" under the X0 interface: that will be the SonicWall MAC address.
      TIP: The MAC addresses of other interfaces on the appliance will be variations on the last octet of the unit's serial number.
  • The MAC address of an interface on a firewall (UTM) appliance can be found:
    • On the GUI, go to Manage | System Setup | Network | Interfaces, select the interface and click the Edit icon. Move to the Advanced tab and here you will find the interface's MAC Address.
    • On the TSR, search for Config MAC of the related interface.
      https://www.sonicwall.com/en-us/support/knowledge-base/170505739448725

Author: Angelo A Vitale
Last update: 2019-07-27 15:32


[Migrated] Add a user or contact to an Office 365 distribution list

Add a user or contact to an Office 365 distribution list

Applies To: Office 365 Admin, Microsoft 365 Business
As the admin of an Office 365 organization, you may need to add one of your users or contacts to a distribution list (see Create distribution lists in Office 365). For example, you can add employees, external partners, or vendors to an email distribution list.

Add a user or contact to a distribution list

  1. Sign in to Office 365 with your work or school account.
  2. Select the app launcher icon and choose Admin.
  3. Choose Groups in the left navigation pane.

  4. On the Groups page, select the distribution list you want to add a contact to.
  5. In the Members section, click Edit.

    Screenshot: Add a contact to a distribution list
  6. On the View Members page, click or tap Add Members, and select the user or contact you want to add to the distribution list.

    Screenshot: Add members to distribution list
  7. Click Save and then Close.

If you haven't created the contact yet, do that first.


Learn how to send email as a distribution list in Office 365.


https://support.office.com/en-us/article/add-a-user-or-contact-to-an-office-365-distribution-list-ba256583-03ca-429e-be4d-a92d9c221ad6

Author: Angelo A Vitale
Last update: 2019-07-27 15:41


[Migrated] Add or remove members from Office 365 groups using the Office 365 admin center

In Office 365, Group members typically create their own Groups, add themselves to Groups they want to join, or are invited by Group owners. If Group ownership changes, or if you determine that a member should be added or removed, as the admin you can also make that change. What is an Office 365 Group?

Add a member to a Group in the Office 365 admin center

  1. Sign in to Office 365 using your global admin or Exchange admin account. Browse to the Office 365 admin center.
  2. In the left navigation pane, choose Groups > Groups.
  3. Select a Group.
  4. In the details pane, next to Members, click Edit.

    Screen shot with Edit members link highlighted
  5. Search for or select the name of the member you want to add.
  6. Click Save.

Remove a member from a Group in the Office 365 admin center

Note: When you remove a member from a private group, it takes 5 minutes for the person to be blocked from the group (after membership changes are fully replicated among domain controllers).


  1. Browse to the Office 365 admin center.
  2. In the left navigation pane, choose Groups > Groups.
  3. Select a Group.
  4. In the details pane, next to Members, click Edit.
  5. Next to the member you want to remove, click Remove.
  6. Click Save to remove the member.

Manage Group owner status

By default, the person who created the group is the group owner. Often a group will have multiple owners for backup support or other reasons. Members can be promoted to owner status and owners can be demoted to member status.

Promote a member to owner status in the Office 365 admin center

  1. Navigate to the Office 365 admin center.
  2. In the left navigation pane, choose Groups > Groups.
  3. Select a Group.
  4. In the Bulk actions pane at the right of the screen, click Edit owners.
  5. Search for or select the name of the member you want to add.
  6. Click Add next to the member's name.
  7. Click Save.

Remove owner status in the Office 365 admin center

  1. Navigate to the Office 365 admin center.
  2. In the left navigation pane, choose Groups > Groups.
  3. Select a Group.
  4. In the details pane at the right of the screen, click Edit Owners.
  5. Click Remove next to the owner's name.
  6. Click Save.

More on managing membership

Articles about managing groups

Author: Angelo A Vitale
Last update: 2019-08-04 14:23


Android & iPhone

Setting up POP/IMAP Email on an Android (Jellybean)

Follow the guide below to set up POP/IMAP email on an Android device.

Step 1: Go to “Apps”.
Step 2: Go to “Email”.
Step 3: Click on the “Menu” button.
Step 4: Go to “Settings”.
Step 5: Click on “+”.
Step 6: Enter your full email address and password for the email account. The description field can be filled in as you see fit.

Setting up POP/IMAP email on an Android (Jellybean) Image 1

Step 7: After hitting “next” you will select the desired protocol.

Setting up POP/IMAP email on an Android (Jellybean) Image 2

Step 8: Enter the incoming mail server information. For the POP/IMAP server enter mail.noip.com, and for the username enter your full email address, for example “you@youremailaddress.com”. If the password field is not filled in, re-enter it. Select port 143 for inbound IMAP, or port 110 for inbound POP.

Setting up POP/IMAP email on an Android (Jellybean) Image 3

Step 9: Enter the outgoing mail server information. For the SMTP host enter mail.noip.com, and for the username enter your full email address, for example “you@youremailaddress.com”. If the password field is not filled in, re-enter it. Select port 587, or port 465 with SSL, for the outbound server.

Setting up POP/IMAP email on an Android (Jellybean) Image 4

Step 10: Follow the remaining on-screen steps to complete the setup.

Author: Angelo A Vitale
Last update: 2019-06-15 12:47


How To Install Linux, Apache, MySQL, PHP (LAMP) stack on Ubuntu 16.04/18.04

How To Install Linux, Apache, MySQL, PHP (LAMP) stack on Ubuntu 18.04

Introduction

A "LAMP" stack is a group of open-source software that is typically installed together to enable a server to host dynamic websites and web apps. This term is actually an acronym which represents the Linux operating system, with the Apache web server. The site data is stored in a MySQL database, and dynamic content is processed by PHP.

In this guide, we will install a LAMP stack on an Ubuntu 18.04 server.

 

Prerequisites

In order to complete this tutorial, you will need to have an Ubuntu 18.04 server with a non-root sudo-enabled user account and a basic firewall. This can be configured using our initial server setup guide for Ubuntu 18.04.

 

Step 1 — Installing Apache and Updating the Firewall

The Apache web server is among the most popular web servers in the world. It's well-documented and has been in wide use for much of the history of the web, which makes it a great default choice for hosting a website.

Install Apache using Ubuntu's package manager, apt:

  • sudo apt update
  • sudo apt install apache2

Since this is a sudo command, these operations are executed with root privileges. It will ask you for your regular user's password to verify your intentions.

Once you've entered your password, apt will tell you which packages it plans to install and how much extra disk space they'll take up. Press Y and hit ENTER to continue, and the installation will proceed.

Adjust the Firewall to Allow Web Traffic

Next, assuming that you have followed the initial server setup instructions and enabled the UFW firewall, make sure that your firewall allows HTTP and HTTPS traffic. You can check that UFW has an application profile for Apache like so:

  • sudo ufw app list
Output
Available applications:
  Apache
  Apache Full
  Apache Secure
  OpenSSH

If you look at the Apache Full profile, it should show that it enables traffic to ports 80 and 443:

  • sudo ufw app info "Apache Full"
Output
Profile: Apache Full
Title: Web Server (HTTP,HTTPS)
Description: Apache v2 is the next generation of the omnipresent Apache web
server.

Ports:
  80,443/tcp

Allow incoming HTTP and HTTPS traffic for this profile:

  • sudo ufw allow in "Apache Full"

You can do a spot check right away to verify that everything went as planned by visiting your server's public IP address in your web browser (see the note under the next heading to find out what your public IP address is if you do not have this information already):

http://your_server_ip

You will see the default Ubuntu 18.04 Apache web page, which is there for informational and testing purposes. It should look something like this:

Ubuntu 18.04 Apache default

If you see this page, then your web server is now correctly installed and accessible through your firewall.
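
If you prefer an optional command-line check (assuming the curl utility is available; it is installed later in this guide with sudo apt install curl), you can request the page headers directly:

  • curl -I http://your_server_ip

A response that begins with HTTP/1.1 200 OK and includes a Server: Apache header confirms that the web server is reachable through the firewall.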

How To Find your Server's Public IP Address

If you do not know what your server's public IP address is, there are a number of ways you can find it. Usually, this is the address you use to connect to your server through SSH.

There are a few different ways to do this from the command line. First, you could use the iproute2 tools to get your IP address by typing this:

  • ip addr show eth0 | grep inet | awk '{ print $2; }' | sed 's/\/.*$//'

This will give you two or three lines back. They are all correct addresses, but your computer may only be able to use one of them, so feel free to try each one.

An alternative method is to use the curl utility to contact an outside party to tell you how it sees your server. This is done by asking a specific server what your IP address is:

  • sudo apt install curl
  • curl http://icanhazip.com

Regardless of the method you use to get your IP address, type it into your web browser's address bar to view the default Apache page.

 

Step 2 — Installing MySQL

Now that you have your web server up and running, it is time to install MySQL. MySQL is a database management system. Basically, it will organize and provide access to databases where your site can store information.

Again, use apt to acquire and install this software:

  • sudo apt install mysql-server

Note: In this case, you do not have to run sudo apt update prior to the command. This is because you recently ran it in the commands above to install Apache. The package index on your computer should already be up-to-date.

This command, too, will show you a list of the packages that will be installed, along with the amount of disk space they'll take up. Enter Y to continue.

When the installation is complete, run a simple security script that comes pre-installed with MySQL which will remove some dangerous defaults and lock down access to your database system. Start the interactive script by running:

  • sudo mysql_secure_installation

This will ask if you want to configure the VALIDATE PASSWORD PLUGIN.

Note: Enabling this feature is something of a judgment call. If enabled, passwords which don't match the specified criteria will be rejected by MySQL with an error. This will cause issues if you use a weak password in conjunction with software which automatically configures MySQL user credentials, such as the Ubuntu packages for phpMyAdmin. It is safe to leave validation disabled, but you should always use strong, unique passwords for database credentials.

Answer Y for yes, or anything else to continue without enabling.

VALIDATE PASSWORD PLUGIN can be used to test passwords
and improve security. It checks the strength of password
and allows the users to set only those passwords which are
secure enough. Would you like to setup VALIDATE PASSWORD plugin?

Press y|Y for Yes, any other key for No:

If you answer “yes”, you'll be asked to select a level of password validation. Keep in mind that if you enter 2 for the strongest level, you will receive errors when attempting to set any password which does not contain numbers, upper and lowercase letters, and special characters, or which is based on common dictionary words.

There are three levels of password validation policy:

LOW    Length >= 8
MEDIUM Length >= 8, numeric, mixed case, and special characters
STRONG Length >= 8, numeric, mixed case, special characters and dictionary                  file

Please enter 0 = LOW, 1 = MEDIUM and 2 = STRONG: 1

Regardless of whether you chose to set up the VALIDATE PASSWORD PLUGIN, your server will next ask you to select and confirm a password for the MySQL root user. This is an administrative account in MySQL that has increased privileges. Think of it as being similar to the root account for the server itself (although the one you are configuring now is a MySQL-specific account). Make sure this is a strong, unique password, and do not leave it blank.

If you enabled password validation, you'll be shown the password strength for the root password you just entered and your server will ask if you want to change that password. If you are happy with your current password, enter N for "no" at the prompt:

Using existing password for root.

Estimated strength of the password: 100
Change the password for root ? ((Press y|Y for Yes, any other key for No) : n

For the rest of the questions, press Y and hit the ENTER key at each prompt. This will remove some anonymous users and the test database, disable remote root logins, and load these new rules so that MySQL immediately respects the changes you have made.

Note that in Ubuntu systems running MySQL 5.7 (and later versions), the root MySQL user is set to authenticate using the auth_socket plugin by default rather than with a password. This allows for some greater security and usability in many cases, but it can also complicate things when you need to allow an external program (e.g., phpMyAdmin) to access the user.

If you prefer to use a password when connecting to MySQL as root, you will need to switch its authentication method from auth_socket to mysql_native_password. To do this, open up the MySQL prompt from your terminal:

  • sudo mysql

Next, check which authentication method each of your MySQL user accounts use with the following command:

  • SELECT user,authentication_string,plugin,host FROM mysql.user;
Output
+------------------+-------------------------------------------+-----------------------+-----------+
| user             | authentication_string                     | plugin                | host      |
+------------------+-------------------------------------------+-----------------------+-----------+
| root             |                                           | auth_socket           | localhost |
| mysql.session    | *THISISNOTAVALIDPASSWORDTHATCANBEUSEDHERE | mysql_native_password | localhost |
| mysql.sys        | *THISISNOTAVALIDPASSWORDTHATCANBEUSEDHERE | mysql_native_password | localhost |
| debian-sys-maint | *CC744277A401A7D25BE1CA89AFF17BF607F876FF | mysql_native_password | localhost |
+------------------+-------------------------------------------+-----------------------+-----------+
4 rows in set (0.00 sec)

In this example, you can see that the root user does in fact authenticate using the auth_socket plugin. To configure the root account to authenticate with a password, run the following ALTER USER command. Be sure to change password to a strong password of your choosing:

  • ALTER USER 'root'@'localhost' IDENTIFIED WITH mysql_native_password BY 'password';

Then, run FLUSH PRIVILEGES which tells the server to reload the grant tables and put your new changes into effect:

  • FLUSH PRIVILEGES;

Check the authentication methods employed by each of your users again to confirm that root no longer authenticates using the auth_socket plugin:

  • SELECT user,authentication_string,plugin,host FROM mysql.user;
Output
+------------------+-------------------------------------------+-----------------------+-----------+
| user             | authentication_string                     | plugin                | host      |
+------------------+-------------------------------------------+-----------------------+-----------+
| root             | *3636DACC8616D997782ADD0839F92C1571D6D78F | mysql_native_password | localhost |
| mysql.session    | *THISISNOTAVALIDPASSWORDTHATCANBEUSEDHERE | mysql_native_password | localhost |
| mysql.sys        | *THISISNOTAVALIDPASSWORDTHATCANBEUSEDHERE | mysql_native_password | localhost |
| debian-sys-maint | *CC744277A401A7D25BE1CA89AFF17BF607F876FF | mysql_native_password | localhost |
+------------------+-------------------------------------------+-----------------------+-----------+
4 rows in set (0.00 sec)

You can see in this example output that the root MySQL user now authenticates using a password. Once you confirm this on your own server, you can exit the MySQL shell:

  • exit

At this point, your database system is now set up and you can move on to installing PHP, the final component of the LAMP stack.
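
As an optional sanity check, you can confirm that the MySQL server is running and that the root credentials you just configured work by asking it for its version:

  • mysql -u root -p -e "SELECT VERSION();"

Enter the root password when prompted; the command prints the server version and exits. If you kept the default auth_socket authentication for root, run sudo mysql -e "SELECT VERSION();" instead.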

 

Step 3 — Installing PHP

PHP is the component of your setup that will process code to display dynamic content. It can run scripts, connect to your MySQL databases to get information, and hand the processed content over to your web server to display.

Once again, leverage the apt system to install PHP. In addition, include some helper packages this time so that PHP code can run under the Apache server and talk to your MySQL database:

  • sudo apt install php libapache2-mod-php php-mysql

This should install PHP without any problems. We'll test this in a moment.
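
As a quick optional check, you can confirm that the PHP module was registered with Apache by listing the loaded modules:

  • sudo apache2ctl -M | grep php

On Ubuntu 18.04 with PHP 7.2 this should print a line such as php7_module (shared).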

In most cases, you will want to modify the way that Apache serves files when a directory is requested. Currently, if a user requests a directory from the server, Apache will first look for a file called index.html. We want to tell the web server to prefer PHP files over others, so make Apache look for an index.php file first.

To do this, type this command to open the dir.conf file in a text editor with root privileges:

  • sudo nano /etc/apache2/mods-enabled/dir.conf

It will look like this:

/etc/apache2/mods-enabled/dir.conf
<IfModule mod_dir.c>
    DirectoryIndex index.html index.cgi index.pl index.php index.xhtml index.htm
</IfModule>

Move the PHP index file, index.php, to the first position after the DirectoryIndex specification, like this:

/etc/apache2/mods-enabled/dir.conf
<IfModule mod_dir.c>
    DirectoryIndex index.php index.html index.cgi index.pl index.xhtml index.htm
</IfModule>

When you are finished, save and close the file by pressing CTRL+X. Confirm the save by typing Y and then hit ENTER to verify the file save location.

After this, restart the Apache web server in order for your changes to be recognized. Do this by typing this:

  • sudo systemctl restart apache2

You can also check on the status of the apache2 service using systemctl:

  • sudo systemctl status apache2
Sample Output
● apache2.service - LSB: Apache2 web server
   Loaded: loaded (/etc/init.d/apache2; bad; vendor preset: enabled)
  Drop-In: /lib/systemd/system/apache2.service.d
           └─apache2-systemd.conf
   Active: active (running) since Tue 2018-04-23 14:28:43 EDT; 45s ago
     Docs: man:systemd-sysv-generator(8)
  Process: 13581 ExecStop=/etc/init.d/apache2 stop (code=exited, status=0/SUCCESS)
  Process: 13605 ExecStart=/etc/init.d/apache2 start (code=exited, status=0/SUCCESS)
    Tasks: 6 (limit: 512)
   CGroup: /system.slice/apache2.service
           ├─13623 /usr/sbin/apache2 -k start
           ├─13626 /usr/sbin/apache2 -k start
           ├─13627 /usr/sbin/apache2 -k start
           ├─13628 /usr/sbin/apache2 -k start
           ├─13629 /usr/sbin/apache2 -k start
           └─13630 /usr/sbin/apache2 -k start

Press Q to exit this status output.

To enhance the functionality of PHP, you have the option to install some additional modules. To see the available options for PHP modules and libraries, pipe the results of apt search into less, a pager which lets you scroll through the output of other commands:

  • apt search php- | less

Use the arrow keys to scroll up and down, and press Q to quit.

The results are all optional components that you can install. It will give you a short description for each:

bandwidthd-pgsql/bionic 2.0.1+cvs20090917-10ubuntu1 amd64
  Tracks usage of TCP/IP and builds html files with graphs

bluefish/bionic 2.2.10-1 amd64
  advanced Gtk+ text editor for web and software development

cacti/bionic 1.1.38+ds1-1 all
  web interface for graphing of monitoring systems

ganglia-webfrontend/bionic 3.6.1-3 all
  cluster monitoring toolkit - web front-end

golang-github-unknwon-cae-dev/bionic 0.0~git20160715.0.c6aac99-4 all
  PHP-like Compression and Archive Extensions in Go

haserl/bionic 0.9.35-2 amd64
  CGI scripting program for embedded environments

kdevelop-php-docs/bionic 5.2.1-1ubuntu2 all
  transitional package for kdevelop-php

kdevelop-php-docs-l10n/bionic 5.2.1-1ubuntu2 all
  transitional package for kdevelop-php-l10n
…
:

To learn more about what each module does, you could search the internet for more information about them. Alternatively, look at the long description of the package by typing:

  • apt show package_name

There will be a lot of output, with one field called Description which will have a longer explanation of the functionality that the module provides.

For example, to find out what the php-cli module does, you could type this:

  • apt show php-cli

Along with a large amount of other information, you'll find something that looks like this:

Output
…
Description: command-line interpreter for the PHP scripting language (default)
 This package provides the /usr/bin/php command interpreter, useful for
 testing PHP scripts from a shell or performing general shell scripting tasks.
 .
 PHP (recursive acronym for PHP: Hypertext Preprocessor) is a widely-used
 open source general-purpose scripting language that is especially suited
 for web development and can be embedded into HTML.
 .
 This package is a dependency package, which depends on Ubuntu's default
 PHP version (currently 7.2).
…

If, after researching, you decide you would like to install a package, you can do so by using the apt install command like you have been doing for the other software.

If you decided that php-cli is something that you need, you could type:

  • sudo apt install php-cli

If you want to install more than one module, you can do that by listing each one, separated by a space, following the apt install command, like this:

  • sudo apt install package1 package2 ...

At this point, your LAMP stack is installed and configured. Before making any more changes or deploying an application, though, it would be helpful to proactively test out your PHP configuration in case there are any issues that should be addressed.

 

Step 4 — Testing PHP Processing on your Web Server

In order to test that your system is configured properly for PHP, create a very basic PHP script called info.php. In order for Apache to find this file and serve it correctly, it must be saved to a very specific directory, which is called the "web root".

In Ubuntu 18.04, this directory is located at /var/www/html/. Create the file at that location by running:

  • sudo nano /var/www/html/info.php

This will open a blank file. Add the following text, which is valid PHP code, inside the file:

info.php
<?php
phpinfo();
?>

When you are finished, save and close the file.

Now you can test whether your web server is able to correctly display content generated by this PHP script. To try this out, visit this page in your web browser. You'll need your server's public IP address again.

The address you will want to visit is:

http://your_server_ip/info.php

The page that you come to should look something like this:

Ubuntu 18.04 default PHP info

This page provides some basic information about your server from the perspective of PHP. It is useful for debugging and to ensure that your settings are being applied correctly.

If you can see this page in your browser, then your PHP is working as expected.

You probably want to remove this file after this test because it could actually give information about your server to unauthorized users. To do this, run the following command:

  • sudo rm /var/www/html/info.php

You can always recreate this page if you need to access the information again later.

 

Conclusion

Now that you have a LAMP stack installed, you have many choices for what to do next. Basically, you've installed a platform that will allow you to install most kinds of websites and web software on your server.

As an immediate next step, you should ensure that connections to your web server are secured, by serving them via HTTPS. The easiest option here is to use Let's Encrypt to secure your site with a free TLS/SSL certificate.
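
If you want to do that right away, a minimal sketch looks like the following (your_domain is a placeholder and must already point at this server in DNS; the Apache plugin package is named python-certbot-apache on Ubuntu 18.04 and python3-certbot-apache on newer releases):

  • sudo apt install certbot python-certbot-apache
  • sudo certbot --apache -d your_domain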

https://www.digitalocean.com/community/tutorials/how-to-install-linux-apache-mysql-php-lamp-stack-ubuntu-18-04

Author: Angelo A Vitale
Last update: 2018-12-10 20:03


How To Install Webmin on Ubuntu 16.04/18.04

Introduction

Webmin is a web-based control panel for any Linux machine which lets you manage your server through a modern web-based interface. With Webmin, you can change settings for common packages on the fly, including web servers and databases, as well as manage users, groups, and software packages.

In this tutorial, you'll install and configure Webmin on your server and secure access to the interface with a valid certificate using Let's Encrypt and Apache. You'll then use Webmin to add new user accounts, and update all packages on your server from the dashboard.

 Prerequisites

To complete this tutorial, you will need an Ubuntu 16.04 or 18.04 server with a sudo-enabled non-root user, Apache installed, a fully qualified domain name pointed at the server, and Certbot installed (Certbot is used in Step 2 to obtain the Let's Encrypt certificate).

 Step 1 — Installing Webmin

First, we need to add the Webmin repository so that we can easily install and update Webmin using our package manager. We do this by adding the repository to the /etc/apt/sources.list file.

Open the file in your editor:

  • sudo nano /etc/apt/sources.list

Then add this line to the bottom of the file to add the new repository:

/etc/apt/sources.list
 . . . 
deb http://download.webmin.com/download/repository sarge contrib

Save the file and exit the editor.

Next, add the Webmin PGP key so that your system will trust the new repository:

  • wget http://www.webmin.com/jcameron-key.asc
  • sudo apt-key add jcameron-key.asc

Next, update the list of packages to include the Webmin repository:

  • sudo apt update

Then install Webmin:

  • sudo apt install webmin

Once the installation finishes, you'll be presented with the following output:

Output
Webmin install complete. You can now login to 
https://your_server_ip:10000 as root with your 
root password, or as any user who can use ´sudo´.

Now, let's secure access to Webmin by putting it behind the Apache web server and adding a valid TLS/SSL certificate.

Step 2 — Securing Webmin with Apache and Let's Encrypt

To access Webmin, you have to specify port 10000 and ensure the port is open on your firewall. This is inconvenient, especially if you're accessing Webmin using an FQDN like webmin.your_domain. We are going to use an Apache virtual host to proxy requests to Webmin's server running on port 10000. We'll then secure the virtual host using a TLS/SSL certificate from Let's Encrypt.

First, create a new Apache virtual host file in Apache's configuration directory:

  • sudo nano /etc/apache2/sites-available/your_domain.conf

Add the following to the file, replacing the email address and domain with your own:

/etc/apache2/sites-available/your_domain.conf
<VirtualHost *:80>
        ServerAdmin your_email
        ServerName your_domain
        ProxyPass / http://localhost:10000/
        ProxyPassReverse / http://localhost:10000/
</VirtualHost>

This configuration tells Apache to pass requests to http://localhost:10000, the Webmin server. It also ensures that internal links generated from Webmin will also pass through Apache.

Save the file and exit the editor.

Next, we need to tell Webmin to stop using TLS/SSL, as Apache will provide that for us going forward.

Open the file /etc/webmin/miniserv.conf in your editor:

  • sudo nano /etc/webmin/miniserv.conf

Find the following line:

/etc/webmin/miniserv.conf
...
ssl=1
...

Change the 1 to a 0. This tells Webmin to stop using SSL.
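
If you prefer to make this change non-interactively (an optional shortcut), sed can edit the setting in place:

  • sudo sed -i 's/^ssl=1/ssl=0/' /etc/webmin/miniserv.conf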

Next we'll add our domain to the list of allowed domains, so that Webmin understands that when we access the panel from our domain, it's not something malicious, like a Cross-Site Scripting (XSS) attack.

Open the file /etc/webmin/config in your editor:

  • sudo nano /etc/webmin/config

Add the following line to the bottom of the file, replacing your_domain with your fully-qualified domain name.

/etc/webmin/config
 . . . 
referers=your_domain

Save the file and exit the editor.

Next, restart Webmin to apply the configuration changes:

  • sudo systemctl restart webmin

Then enable Apache's proxy_http module:

  • sudo a2enmod proxy_http

You'll see the following output:

Output
Considering dependency proxy for proxy_http:
Enabling module proxy.
Enabling module proxy_http.
To activate the new configuration, you need to run:
  systemctl restart apache2

The output suggests you restart Apache, but first, activate the new Apache virtual host you created:

  • sudo a2ensite your_domain

You'll see the following output indicating your site is enabled:

Output
Enabling site your_domain.
To activate the new configuration, you need to run:
  systemctl reload apache2

Now restart Apache completely to activate the proxy_http module and the new virtual host:

  • sudo systemctl restart apache2

Note: Ensure that you allow incoming traffic to your web server on port 80 and port 443 as shown in the prerequisite tutorial How To Install Linux, Apache, MySQL, PHP (LAMP) stack on Ubuntu 18.04. You can do this with the command sudo ufw allow in "Apache Full".

Navigate to http://your_domain in your browser, and you will see the Webmin login page appear.

Warning: Do NOT log in to Webmin yet, as we haven't enabled SSL. If you log in now, your credentials will be sent to the server in clear text.

Now let's configure a certificate so that your connection is encrypted while using Webmin. In order to do this, we're going to use Let's Encrypt.
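
If Certbot is not yet installed on the server, install it along with its Apache plugin first (the plugin package is named python-certbot-apache on Ubuntu 18.04 and python3-certbot-apache on newer releases):

  • sudo apt install certbot python-certbot-apache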

Tell Certbot to generate a TLS/SSL certificate for your domain and configure Apache to redirect traffic to the secure site:

  • sudo certbot --apache --email your_email -d your_domain --agree-tos --redirect --noninteractive

You'll see the following output:

Output
Saving debug log to /var/log/letsencrypt/letsencrypt.log
Plugins selected: Authenticator apache, Installer apache
Obtaining a new certificate
Performing the following challenges:
http-01 challenge for your_domain
Enabled Apache rewrite module
Waiting for verification...
Cleaning up challenges
Created an SSL vhost at /etc/apache2/sites-available/your_domain-le-ssl.conf
Enabled Apache socache_shmcb module
Enabled Apache ssl module
Deploying Certificate to VirtualHost /etc/apache2/sites-available/your_domain-le-ssl.conf
Enabling available site: /etc/apache2/sites-available/your_domain-le-ssl.conf
Enabled Apache rewrite module
Redirecting vhost in /etc/apache2/sites-enabled/your_domain.conf to ssl vhost in /etc/apache2/sites-available/your_domain-le-ssl.conf

-------------------------------------------------------------------------------
Congratulations! You have successfully enabled https://your_domain

You should test your configuration at:
https://www.ssllabs.com/ssltest/analyze.html?d=your_domain
-------------------------------------------------------------------------------

The output indicates that the certificate was installed and Apache is configured to redirect requests from http://your_domain to https://your_domain.

You've now set up a secured, working instance of Webmin. Let's look at how to use it.

Step 3 – Using Webmin

Webmin has modules that can control everything from the BIND DNS Server to something as simple as adding users to the system. Let's look at how to create a new user, and then explore how to update software packages using Webmin.

In order to log in to Webmin, navigate to http://your_domain and sign in with either the root user or a user with sudo privileges.

Managing Users and Groups

Let's manage the users and groups on the server.

First, click the System tab, and then click the Users and Groups button. From here you can either add a user, manage a user, or add or manage a group.

Let's create a new user called deploy which could be used for hosting web applications. To add a user, click Create a new user, which is located at the top of the users table. This displays the Create User screen, where you can supply the username, password, groups, and other options. Follow these instructions to create the user:

  1. Fill in Username with deploy.
  2. Select Automatic for User ID.
  3. Fill in Real Name with a descriptive name like Deployment user.
  4. For Home Directory, select Automatic.
  5. For Shell, select /bin/bash from the dropdown list.
  6. For Password, select Normal Password and type in a password of your choice.
  7. For Primary Group, select New group with same name as user.
  8. For Secondary Group, select sudo from the All groups list, and press the -> button to add the group to the in groups list.
  9. Press Create to create this new user.

When creating a user, you can set options for password expiry, the user's shell, or whether they are allowed a home directory.

Next, let's look at how to install updates to our system.

Updating Packages

Webmin lets you update all of your packages through its user interface. To update all of your packages, click the Dashboard link, and then locate the Package updates field. If there are updates available, you'll see a link that states the number of available updates, as shown in the following figure:

Webmin shows the number of package updates available

Click this link, and then press Update selected packages to start the update. You may be asked to reboot the server, which you can also do through the Webmin interface.

Conclusion

You now have a secured, working instance of Webmin and you've used the interface to create a user and update packages. Webmin gives you access to many things you'd normally need to access through the console, and it organizes them in an intuitive way. For example, if you have Apache installed, you would find the configuration tab for it under Servers, and then Apache.

Explore the interface further, or check out the Official Webmin wiki to learn more about managing your system with Webmin.

https://www.digitalocean.com/community/tutorials/how-to-install-webmin-on-ubuntu-18-04

Author: Angelo A Vitale
Last update: 2018-12-25 10:59


How To Install WordPress with LAMP on Ubuntu 16.04

How To Install WordPress with LAMP on Ubuntu 16.04

Introduction

WordPress is the most popular CMS (content management system) on the internet. It allows you to easily set up flexible blogs and websites on top of a MySQL backend with PHP processing. WordPress has seen incredible adoption and is a great choice for getting a website up and running quickly. After setup, almost all administration can be done through the web frontend.

In this guide, we'll focus on getting a WordPress instance set up on a LAMP stack (Linux, Apache, MySQL, and PHP) on an Ubuntu 16.04 server.

 Prerequisites

In order to complete this tutorial, you will need access to an Ubuntu 16.04 server.

You will need to perform the following tasks before you can start this guide:

  • Create a sudo user on your server: We will be completing the steps in this guide using a non-root user with sudo privileges. You can create a user with sudo privileges by following our Ubuntu 16.04 initial server setup guide.
  • Install a LAMP stack: WordPress will need a web server, a database, and PHP in order to correctly function. Setting up a LAMP stack (Linux, Apache, MySQL, and PHP) fulfills all of these requirements. Follow this guide to install and configure this software.
  • Secure your site with SSL: WordPress serves dynamic content and handles user authentication and authorization. TLS/SSL is the technology that allows you to encrypt the traffic from your site so that your connection is secure. The way you set up SSL will depend on whether you have a domain name for your site.
    • If you have a domain name... the easiest way to secure your site is with Let's Encrypt, which provides free, trusted certificates. Follow our Let's Encrypt guide for Apache to set this up.
    • If you do not have a domain... and you are just using this configuration for testing or personal use, you can use a self-signed certificate instead. This provides the same type of encryption, but without the domain validation. Follow our self-signed SSL guide for Apache to get set up.

When you are finished with the setup steps, log in to your server as your sudo user and continue below.

 Step 1: Create a MySQL Database and User for WordPress

The first step that we will take is a preparatory one. WordPress uses MySQL to manage and store site and user information. We have MySQL installed already, but we need to make a database and a user for WordPress to use.

To get started, log into the MySQL root (administrative) account by issuing this command:

  • mysql -u root -p

You will be prompted for the password you set for the MySQL root account when you installed the software.

First, we can create a separate database that WordPress can control. You can call this whatever you would like, but we will be using wordpress in this guide to keep it simple. You can create the database for WordPress by typing:

  • CREATE DATABASE wordpress DEFAULT CHARACTER SET utf8 COLLATE utf8_unicode_ci;

Note: Every MySQL statement must end in a semi-colon (;). Check to make sure this is present if you are running into any issues.

Next, we are going to create a separate MySQL user account that we will use exclusively to operate on our new database. Creating one-function databases and accounts is a good idea from a management and security standpoint. We will use the name wordpressuser in this guide. Feel free to change this if you'd like.

We are going to create this account, set a password, and grant access to the database we created. We can do this by typing the following command. Remember to choose a strong password here for your database user:

  • GRANT ALL ON wordpress.* TO 'wordpressuser'@'localhost' IDENTIFIED BY 'password';

You now have a database and user account, each made specifically for WordPress. We need to flush the privileges so that the current instance of MySQL knows about the recent changes we've made:

  • FLUSH PRIVILEGES;

Exit out of MySQL by typing:

  • EXIT;
 Step 2: Install Additional PHP Extensions

When setting up our LAMP stack, we only required a very minimal set of extensions in order to get PHP to communicate with MySQL. WordPress and many of its plugins leverage additional PHP extensions.

We can download and install some of the most popular PHP extensions for use with WordPress by typing:

  • sudo apt-get update
  • sudo apt-get install php-curl php-gd php-mbstring php-mcrypt php-xml php-xmlrpc
Note

Each WordPress plugin has its own set of requirements. Some may require additional PHP packages to be installed. Check your plugin documentation to discover its PHP requirements. If they are available, they can be installed with apt-get as demonstrated above.

We will restart Apache to leverage these new extensions in the next section. If you are returning here to install additional plugins, you can restart Apache now by typing:

  • sudo systemctl restart apache2

Step 3: Adjust Apache's Configuration to Allow for .htaccess Overrides and Rewrites

Next, we will be making a few minor adjustments to our Apache configuration. Currently, the use of .htaccess files is disabled. WordPress and many WordPress plugins use these files extensively for in-directory tweaks to the web server's behavior.

Additionally, we will enable mod_rewrite, which will be needed in order to get WordPress permalinks to function correctly.

Enable .htaccess Overrides

Open the primary Apache configuration file to make our first change:

  • sudo nano /etc/apache2/apache2.conf

To allow .htaccess files, we need to set the AllowOverride directive within a Directory block pointing to our document root. Towards the bottom of the file, add the following block:

/etc/apache2/apache2.conf
. . .

<Directory /var/www/html/>
    AllowOverride All
</Directory>

. . .

When you are finished, save and close the file.

Enable the Rewrite Module

Next, we can enable mod_rewrite so that we can utilize the WordPress permalink feature:

  • sudo a2enmod rewrite

Enable the Changes

Before we implement the changes we've made, check to make sure we haven't made any syntax errors:

  • sudo apache2ctl configtest

The output might have a message that looks like this:

Output
AH00558: apache2: Could not reliably determine the server's fully qualified domain name, using 127.0.1.1. Set the 'ServerName' directive globally to suppress this message
Syntax OK

If you wish to suppress the top line, just add a ServerName directive to the /etc/apache2/apache2.conf file pointing to your server's domain or IP address. This is just a message, however, and doesn't affect the functionality of our site. As long as the output contains Syntax OK, you are ready to continue.
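
For example (an optional step; your_domain_or_IP is a placeholder for your server's domain name or IP address), you could append a global ServerName directive and re-run the syntax check:

  • sudo bash -c 'echo "ServerName your_domain_or_IP" >> /etc/apache2/apache2.conf'
  • sudo apache2ctl configtest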

Restart Apache to implement the changes:

  • sudo systemctl restart apache2

Step 4: Download WordPress

Now that our server software is configured, we can download and set up WordPress. For security reasons in particular, it is always recommended to get the latest version of WordPress from their site.

Change into a writable directory and then download the compressed release by typing:

  • cd /tmp
  • curl -O https://wordpress.org/latest.tar.gz

Extract the compressed file to create the WordPress directory structure:

  • tar xzvf latest.tar.gz

We will be moving these files into our document root momentarily. Before we do, we can add a dummy .htaccess file and set its permissions so that this will be available for WordPress to use later.

Create the file and set the permissions by typing:

  • touch /tmp/wordpress/.htaccess
  • chmod 660 /tmp/wordpress/.htaccess

We'll also copy over the sample configuration file to the filename that WordPress actually reads:

  • cp /tmp/wordpress/wp-config-sample.php /tmp/wordpress/wp-config.php

We can also create the upgrade directory, so that WordPress won't run into permissions issues when trying to do this on its own following an update to its software:

  • mkdir /tmp/wordpress/wp-content/upgrade

Now, we can copy the entire contents of the directory into our document root. We are using the -a flag to make sure our permissions are maintained. We are using a dot at the end of our source directory to indicate that everything within the directory should be copied, including hidden files (like the .htaccess file we created):

  • sudo cp -a /tmp/wordpress/. /var/www/html

Step 5: Configure the WordPress Directory

Before we do the web-based WordPress setup, we need to adjust some items in our WordPress directory.

Adjusting the Ownership and Permissions

One of the big things we need to accomplish is setting up reasonable file permissions and ownership. We need to be able to write to these files as a regular user, and we need the web server to also be able to access and adjust certain files and directories in order to function correctly.

We'll start by assigning ownership over all of the files in our document root to our username. We will use sammy as our username in this guide, but you should change this to match whatever your sudo user is called. We will assign group ownership to the www-data group:

  • sudo chown -R sammy:www-data /var/www/html

Next, we will set the setgid bit on each of the directories within the document root. This causes new files created within these directories to inherit the group of the parent directory (which we just set to www-data) instead of the creating user's primary group. This just makes sure that whenever we create a file in the directory on the command line, the web server will still have group ownership over it.

We can set the setgid bit on every directory in our WordPress installation by typing:

  • sudo find /var/www/html -type d -exec chmod g+s {} \;

There are a few other fine-grained permissions we'll adjust. First, we'll give group write access to the wp-content directory so that the web interface can make theme and plugin changes:

  • sudo chmod g+w /var/www/html/wp-content

As part of this process, we will give the web server write access to all of the content in these two directories:

  • sudo chmod -R g+w /var/www/html/wp-content/themes
  • sudo chmod -R g+w /var/www/html/wp-content/plugins

This should be a reasonable permissions set to start with. Some plugins and procedures might require additional tweaks.

Setting up the WordPress Configuration File

Now, we need to make some changes to the main WordPress configuration file.

When we open the file, our first order of business will be to adjust some secret keys to provide some security for our installation. WordPress provides a secure generator for these values so that you do not have to try to come up with good values on your own. These are only used internally, so it won't hurt usability to have complex, secure values here.

To grab secure values from the WordPress secret key generator, type:

  • curl -s https://api.wordpress.org/secret-key/1.1/salt/

You will get back unique values that look something like this:

Warning! It is important that you request unique values each time. Do NOT copy the values shown below!

Output
define('AUTH_KEY',         '1jl/vqfs<XhdXoAPz9 DO NOT COPY THESE VALUES c_j{iwqD^<+c9.k<J@4H');
define('SECURE_AUTH_KEY',  'E2N-h2]Dcvp+aS/p7X DO NOT COPY THESE VALUES {Ka(f;rv?Pxf})CgLi-3');
define('LOGGED_IN_KEY',    'W(50,{W^,OPB%PB<JF DO NOT COPY THESE VALUES 2;y&,2m%3]R6DUth[;88');
define('NONCE_KEY',        'll,4UC)7ua+8<!4VM+ DO NOT COPY THESE VALUES #´DXF+[$atzM7 o^-C7g');
define('AUTH_SALT',        'koMrurzOA+|L_lG}kf DO NOT COPY THESE VALUES  07VC*Lj*lD&?3w!BT#-');
define('SECURE_AUTH_SALT', 'p32*p,]z%LZ+pAu:VY DO NOT COPY THESE VALUES C-?y+K0DK_+F|0h{!_xY');
define('LOGGED_IN_SALT',   'i^/G2W7!-1H2OQ+t$3 DO NOT COPY THESE VALUES t6**bRVFSD[Hi])-qS´|');
define('NONCE_SALT',       'Q6]U:K?j4L%Z]}h^q7 DO NOT COPY THESE VALUES 1% ^qUswWgn+6&xqHN&%');

These are configuration lines that we can paste directly in our configuration file to set secure keys. Copy the output you received now.

Now, open the WordPress configuration file:

  • nano /var/www/html/wp-config.php

Find the section that contains the dummy values for those settings. It will look something like this:

/var/www/html/wp-config.php
. . .

define('AUTH_KEY',         'put your unique phrase here');
define('SECURE_AUTH_KEY',  'put your unique phrase here');
define('LOGGED_IN_KEY',    'put your unique phrase here');
define('NONCE_KEY',        'put your unique phrase here');
define('AUTH_SALT',        'put your unique phrase here');
define('SECURE_AUTH_SALT', 'put your unique phrase here');
define('LOGGED_IN_SALT',   'put your unique phrase here');
define('NONCE_SALT',       'put your unique phrase here');

. . .

Delete those lines and paste in the values you copied from the command line:

/var/www/html/wp-config.php
. . .

define('AUTH_KEY',         'VALUES COPIED FROM THE COMMAND LINE');
define('SECURE_AUTH_KEY',  'VALUES COPIED FROM THE COMMAND LINE');
define('LOGGED_IN_KEY',    'VALUES COPIED FROM THE COMMAND LINE');
define('NONCE_KEY',        'VALUES COPIED FROM THE COMMAND LINE');
define('AUTH_SALT',        'VALUES COPIED FROM THE COMMAND LINE');
define('SECURE_AUTH_SALT', 'VALUES COPIED FROM THE COMMAND LINE');
define('LOGGED_IN_SALT',   'VALUES COPIED FROM THE COMMAND LINE');
define('NONCE_SALT',       'VALUES COPIED FROM THE COMMAND LINE');

. . .

Next, we need to modify some of the database connection settings at the beginning of the file. You need to adjust the database name, the database user, and the associated password that we configured within MySQL.

The other change we need to make is to set the method that WordPress should use to write to the filesystem. Since we've given the web server permission to write where it needs to, we can explicitly set the filesystem method to "direct". Failure to set this with our current settings would result in WordPress prompting for FTP credentials when we perform some actions.

This setting can be added below the database connection settings, or anywhere else in the file:

/var/www/html/wp-config.php
. . .

define('DB_NAME', 'wordpress');

/** MySQL database username */
define('DB_USER', 'wordpressuser');

/** MySQL database password */
define('DB_PASSWORD', 'password');

. . .

define('FS_METHOD', 'direct');

Save and close the file when you are finished.

 Step 6: Complete the Installation Through the Web Interface

Now that the server configuration is complete, we can complete the installation through the web interface.

In your web browser, navigate to your server's domain name or public IP address:

http://server_domain_or_IP

Select the language you would like to use:

WordPress language selection

Next, you will come to the main setup page.

Select a name for your WordPress site and choose a username (it is recommended not to choose something like "admin" for security purposes). A strong password is generated automatically. Save this password or select an alternative strong password.

Enter your email address and select whether you want to discourage search engines from indexing your site:

WordPress setup installation

When you click ahead, you will be taken to a page that prompts you to log in:

WordPress login prompt

Once you log in, you will be taken to the WordPress administration dashboard:

WordPress administration dashboard

 Upgrading WordPress

As WordPress upgrades become available, you will be unable to install them through the interface with the current permissions.

The permissions we selected here are meant to provide a good balance between security and usability for the 99% of the time between upgrades. However, they are a bit too restrictive for the software to automatically apply updates.

When an update becomes available, log back into your server as your sudo user. Temporarily give the web server process access to the whole document root:

  • sudo chown -R www-data /var/www/html

Now, go back to the WordPress administration panel and apply the update.

When you are finished, lock the permissions down again for security:

  • sudo chown -R sammy /var/www/html

This should only be necessary when applying upgrades to WordPress itself.

Conclusion

WordPress should be installed and ready to use! Some common next steps are to choose the permalinks setting for your posts (can be found in Settings > Permalinks) or to select a new theme (in Appearance > Themes). If this is your first time using WordPress, explore the interface a bit to get acquainted with your new CMS.

https://www.digitalocean.com/community/tutorials/how-to-install-wordpress-with-lamp-on-ubuntu-16-04

Author: Angelo A Vitale
Last update: 2018-12-26 20:48


How To Set Up Apache Virtual Hosts on Ubuntu 14.04 LTS/16.04

How To Set Up Apache Virtual Hosts on Ubuntu 14.04 LTS

Introduction

The Apache web server is the most popular way of serving web content on the internet. It accounts for more than half of all active websites on the internet and is extremely powerful and flexible.

Apache breaks its functionality and components into individual units that can be customized and configured independently. The basic unit that describes an individual site or domain is called a virtual host.

These designations allow the administrator to use one server to host multiple domains or sites off of a single interface or IP by using a matching mechanism. This is relevant to anyone looking to host more than one site off of a single VPS.

Each domain that is configured will direct the visitor to a specific directory holding that site's information, never indicating that the same server is also responsible for other sites. This scheme is expandable without any software limit as long as your server can handle the load.
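
As a preview of where this is headed (a minimal sketch only; the actual virtual host files are created later in this tutorial, and example.com is a placeholder), a basic virtual host definition looks like this:

/etc/apache2/sites-available/example.com.conf
<VirtualHost *:80>
    ServerName example.com
    ServerAlias www.example.com
    DocumentRoot /var/www/example.com/public_html
</VirtualHost>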

In this guide, we will walk you through how to set up Apache virtual hosts on an Ubuntu 14.04 VPS. During this process, you'll learn how to serve different content to different visitors depending on which domains they are requesting.

 

Prerequisites

Before you begin this tutorial, you should create a non-root user as described in steps 1-4 here.

You will also need to have Apache installed in order to work through these steps. If you haven't already done so, you can get Apache installed on your server through apt-get:

sudo apt-get update
sudo apt-get install apache2

After these steps are complete, we can get started.

For the purposes of this guide, my configuration will make a virtual host for example.com and another for test.com. These will be referenced throughout the guide, but you should substitute your own domains or values while following along.

To learn how to set up your domain names with DigitalOcean, follow this link. If you do not have domains available to play with, you can use dummy values.

We will show how to edit your local hosts file later on to test the configuration if you are using dummy values. This will allow you to test your configuration from your home computer, even though your content won't be available through the domain name to other visitors.

 

Step One — Create the Directory Structure

The first step that we are going to take is to make a directory structure that will hold the site data that we will be serving to visitors.

Our document root (the top-level directory that Apache looks at to find content to serve) will be set to individual directories under the /var/www directory. We will create a directory here for both of the virtual hosts we plan on making.

Within each of these directories, we will create a public_html folder that will hold our actual files. This gives us some flexibility in our hosting.

For instance, for our sites, we're going to make our directories like this:

sudo mkdir -p /var/www/example.com/public_html
sudo mkdir -p /var/www/test.com/public_html

Here, example.com and test.com represent the domain names that we want to serve from our VPS.

 

Step Two — Grant Permissions

Now we have the directory structure for our files, but they are owned by our root user. If we want our regular user to be able to modify files in our web directories, we can change the ownership by doing this:

sudo chown -R $USER:$USER /var/www/example.com/public_html
sudo chown -R $USER:$USER /var/www/test.com/public_html

The $USER variable will take the value of the user you are currently logged in as when you press "ENTER". By doing this, our regular user now owns the public_html subdirectories where we will be storing our content.
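
If you want to confirm that the ownership change took effect, a quick check (not part of the original steps) is to list the directories and look at the owner and group columns:

# both directories should now be owned by your regular user
ls -ld /var/www/example.com/public_html /var/www/test.com/public_html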

We should also modify our permissions a little bit to ensure that read access is permitted to the general web directory and all of the files and folders it contains so that pages can be served correctly:

sudo chmod -R 755 /var/www

Your web server should now have the permissions it needs to serve content, and your user should be able to create content within the necessary folders.

 

Step Three — Create Demo Pages for Each Virtual Host

We have our directory structure in place. Let's create some content to serve.

We're just going for a demonstration, so our pages will be very simple. We're just going to make an index.html page for each site.

Let's start with example.com. We can open up an index.html file in our editor by typing:

nano /var/www/example.com/public_html/index.html

In this file, create a simple HTML document that indicates the site it is connected to. My file looks like this:

<html>
  <head>
    <title>Welcome to Example.com!</title>
  </head>
  <body>
    <h1>Success!  The example.com virtual host is working!</h1>
  </body>
</html>

Save and close the file when you are finished.

We can copy this file to use as the basis for our second site by typing:

cp /var/www/example.com/public_html/index.html /var/www/test.com/public_html/index.html

We can then open the file and modify the relevant pieces of information:

nano /var/www/test.com/public_html/index.html
<html>
  <head>
    <title>Welcome to Test.com!</title>
  </head>
  <body>
    <h1>Success!  The test.com virtual host is working!</h1>
  </body>
</html>

Save and close this file as well. You now have the pages necessary to test the virtual host configuration.

 

Step Four — Create New Virtual Host Files

Virtual host files are the files that specify the actual configuration of our virtual hosts and dictate how the Apache web server will respond to various domain requests.

Apache comes with a default virtual host file called 000-default.conf that we can use as a jumping off point. We are going to copy it over to create a virtual host file for each of our domains.

We will start with one domain, configure it, copy it for our second domain, and then make the few further adjustments needed. The default Ubuntu configuration requires that each virtual host file end in .conf.

Create the First Virtual Host File

Start by copying the file for the first domain:

sudo cp /etc/apache2/sites-available/000-default.conf /etc/apache2/sites-available/example.com.conf

Open the new file in your editor with root privileges:

sudo nano /etc/apache2/sites-available/example.com.conf

The file will look something like this (I've removed the comments here to make the file more approachable):

<VirtualHost *:80>
    ServerAdmin webmaster@localhost
    DocumentRoot /var/www/html
    ErrorLog ${APACHE_LOG_DIR}/error.log
    CustomLog ${APACHE_LOG_DIR}/access.log combined
</VirtualHost>

As you can see, there's not much here. We will customize the items here for our first domain and add some additional directives. This virtual host section matches any requests that are made on port 80, the default HTTP port.

First, we need to change the ServerAdmin directive to an email that the site administrator can receive emails through.

ServerAdmin admin@example.com

After this, we need to add two directives. The first, called ServerName, establishes the base domain that should match for this virtual host definition. This will most likely be your domain. The second, called ServerAlias, defines further names that should match as if they were the base name. This is useful for matching hosts you defined, like www:

ServerName example.com
ServerAlias www.example.com

The only other thing we need to change for a basic virtual host file is the location of the document root for this domain. We already created the directory we need, so we just need to alter the DocumentRoot directive to reflect the directory we created:

DocumentRoot /var/www/example.com/public_html

In total, our virtual host file should look like this:

<VirtualHost *:80>
    ServerAdmin admin@example.com
    ServerName example.com
    ServerAlias www.example.com
    DocumentRoot /var/www/example.com/public_html
    ErrorLog ${APACHE_LOG_DIR}/error.log
    CustomLog ${APACHE_LOG_DIR}/access.log combined
</VirtualHost>

Save and close the file.

Copy First Virtual Host and Customize for Second Domain

Now that we have our first virtual host file established, we can create our second one by copying that file and adjusting it as needed.

Start by copying it:

sudo cp /etc/apache2/sites-available/example.com.conf /etc/apache2/sites-available/test.com.conf

Open the new file with root privileges in your editor:

sudo nano /etc/apache2/sites-available/test.com.conf

You now need to modify all of the pieces of information to reference your second domain. When you are finished, it may look something like this:

<VirtualHost *:80>
    ServerAdmin admin@test.com
    ServerName test.com
    ServerAlias www.test.com
    DocumentRoot /var/www/test.com/public_html
    ErrorLog ${APACHE_LOG_DIR}/error.log
    CustomLog ${APACHE_LOG_DIR}/access.log combined
</VirtualHost>

Save and close the file when you are finished.

 

Step Five — Enable the New Virtual Host Files

Now that we have created our virtual host files, we must enable them. Apache includes some tools that allow us to do this.

We can use the a2ensite tool to enable each of our sites like this:

sudo a2ensite example.com.conf
sudo a2ensite test.com.conf

When you are finished, you need to restart Apache to make these changes take effect:

sudo service apache2 restart

You will most likely receive a message saying something similar to:

 * Restarting web server apache2
 AH00558: apache2: Could not reliably determine the server's fully qualified domain name, using 127.0.0.1. Set the 'ServerName' directive globally to suppress this message

This is a harmless message that does not affect our site.
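
If you would like to suppress this warning, one common approach (optional, and not required for the virtual hosts to work) is to set a ServerName directive globally, as the message itself suggests. The hostname below is only a placeholder; substitute your server's name or IP:

# append a global ServerName and reload Apache
echo "ServerName localhost" | sudo tee -a /etc/apache2/apache2.conf
sudo apache2ctl configtest
sudo service apache2 restart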

 

Step Six — Set Up Local Hosts File (Optional)

If you haven't been using actual domain names that you own to test this procedure and have been using some example domains instead, you can at least test the functionality of this process by temporarily modifying the hosts file on your local computer.

This will intercept any requests for the domains that you configured and point them to your VPS server, just as the DNS system would do if you were using registered domains. This will only work from your computer though, and is simply useful for testing purposes.

Make sure you are operating on your local computer for these steps and not your VPS server. You will need to know the computer's administrative password or otherwise be a member of the administrative group.

If you are on a Mac or Linux computer, edit your local file with administrative privileges by typing:

sudo nano /etc/hosts

If you are on a Windows machine, you can find instructions on altering your hosts file here.

The details that you need to add are the public IP address of your VPS server followed by the domain you want to use to reach that VPS.

For the domains that I used in this guide, assuming that my VPS IP address is 111.111.111.111, I could add the following lines to the bottom of my hosts file:

127.0.0.1   localhost
127.0.1.1   guest-desktop
111.111.111.111 example.com
111.111.111.111 test.com

This will intercept any requests for example.com and test.com made on our computer and send them to our server at 111.111.111.111. This is what we want when we do not actually own these domains and just need to test our virtual hosts.

Save and close the file.

 

Step Seven — Test your Results

Now that you have your virtual hosts configured, you can test your setup easily by going to the domains that you configured in your web browser:

http://example.com

You should see a page that looks like this:

Apache virt host example

Likewise, if you visit your second page:

http://test.com

You will see the file you created for your second site:

Apache virt host test

If both of these sites work well, you've successfully configured two virtual hosts on the same server.

If you adjusted your home computer's hosts file, you may want to delete the lines you added now that you verified that your configuration works. This will prevent your hosts file from being filled with entries that are not actually necessary.

If you need to access this long term, consider purchasing a domain name for each site you need and setting it up to point to your VPS server.
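
If you would rather not edit your hosts file at all, you can also exercise each virtual host from the command line by sending the Host header straight to the server's IP address. This is just a quick sketch using the same placeholder address as above:

# each request should return the matching index.html page
curl -H "Host: example.com" http://111.111.111.111/
curl -H "Host: test.com" http://111.111.111.111/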

 

Conclusion

If you followed along, you should now have a single server handling two separate domain names. You can expand this process by following the steps we outlined above to make additional virtual hosts.

There is no software limit on the number of domain names Apache can handle, so feel free to make as many as your server is capable of handling.

Author: Angelo A Vitale
Last update: 2018-12-10 19:52


How To Install and Secure phpMyAdmin on Ubuntu 16.04

How To Install and Secure phpMyAdmin on Ubuntu 16.04

Introduction

While many users need the functionality of a database management system like MySQL, they may not feel comfortable interacting with the system solely from the MySQL prompt.

phpMyAdmin was created so that users can interact with MySQL through a web interface. In this guide, we'll discuss how to install and secure phpMyAdmin so that you can safely use it to manage your databases from an Ubuntu 16.04 system.

 

Prerequisites

Before you get started with this guide, you need to have some basic steps completed.

First, we'll assume that you are using a non-root user with sudo privileges, as described in steps 1-4 in the initial server setup of Ubuntu 16.04.

We're also going to assume that you've completed a LAMP (Linux, Apache, MySQL, and PHP) installation on your Ubuntu 16.04 server. If this is not completed yet, you can follow this guide on installing a LAMP stack on Ubuntu 16.04.

Finally, there are important security considerations when using software like phpMyAdmin, since it:

  • Communicates directly with your MySQL installation
  • Handles authentication using MySQL credentials
  • Executes and returns results for arbitrary SQL queries

For these reasons, and because it is a widely-deployed PHP application which is frequently targeted for attack, you should never run phpMyAdmin on remote systems over a plain HTTP connection. If you do not have an existing domain configured with an SSL/TLS certificate, you can follow this guide on securing Apache with Let's Encrypt on Ubuntu 16.04.

Once you are finished with these steps, you're ready to get started with this guide.

 

Step One — Install phpMyAdmin

To get started, we will install phpMyAdmin from the default Ubuntu repositories.

We can do this by updating our local package index and then using the apt packaging system to pull down the files and install them on our system:

  • sudo apt-get update
  • sudo apt-get install phpmyadmin php-mbstring php-gettext

This will ask you a few questions in order to configure your installation correctly.

Warning: When the first prompt appears, apache2 is highlighted, but not selected. If you do not hit Space to select Apache, the installer will not move the necessary files during installation. Hit Space, Tab, and then Enter to select Apache.

  • For the server selection, choose apache2.
  • Select yes when asked whether to use dbconfig-common to set up the database
  • You will be prompted for your database administrator's password
  • You will then be asked to choose and confirm a password for the phpMyAdmin application itself

The installation process actually adds the phpMyAdmin Apache configuration file into the /etc/apache2/conf-enabled/ directory, where it is automatically read.

The only thing we need to do is explicitly enable the PHP mcrypt and mbstring extensions, which we can do by typing:

  • sudo phpenmod mcrypt
  • sudo phpenmod mbstring

Afterwards, you'll need to restart Apache for your changes to be recognized:

  • sudo systemctl restart apache2

You can now access the web interface by visiting your server's domain name or public IP address followed by /phpmyadmin:

https://domain_name_or_IP/phpmyadmin

phpMyAdmin login screen

You can now log into the interface using the root username and the administrative password you set up during the MySQL installation.

When you log in, you'll see the user interface, which will look something like this:

phpMyAdmin user interface

 

Step Two — Secure your phpMyAdmin Instance

We were able to get our phpMyAdmin interface up and running fairly easily. However, we are not done yet. Because of its ubiquity, phpMyAdmin is a popular target for attackers. We should take extra steps to prevent unauthorized access.

One of the easiest ways of doing this is to place a gateway in front of the entire application. We can do this using Apache's built-in .htaccess authentication and authorization functionalities.

Configure Apache to Allow .htaccess Overrides

First, we need to enable the use of .htaccess file overrides by editing our Apache configuration file.

We will edit the linked file that has been placed in our Apache configuration directory:

  • sudo nano /etc/apache2/conf-available/phpmyadmin.conf

We need to add an AllowOverride All directive within the <Directory /usr/share/phpmyadmin> section of the configuration file, like this:

/etc/apache2/conf-available/phpmyadmin.conf
<Directory /usr/share/phpmyadmin>
    Options FollowSymLinks
    DirectoryIndex index.php
    AllowOverride All
</Directory>

When you have added this line, save and close the file.

To implement the changes you made, restart Apache:

  • sudo systemctl restart apache2

Create an .htaccess File

Now that we have enabled .htaccess use for our application, we need to create one to actually implement some security.

In order for this to be successful, the file must be created within the application directory. We can create the necessary file and open it in our text editor with root privileges by typing:

  • sudo nano /usr/share/phpmyadmin/.htaccess

Within this file, we need to enter the following information:

/usr/share/phpmyadmin/.htaccess
AuthType Basic
AuthName "Restricted Files"
AuthUserFile /etc/phpmyadmin/.htpasswd
Require valid-user

Let's go over what each of these lines means:

  • AuthType Basic: This line specifies the authentication type that we are implementing. This type will implement password authentication using a password file.
  • AuthName: This sets the message for the authentication dialog box. You should keep this generic so that unauthorized users won't gain any information about what is being protected.
  • AuthUserFile: This sets the location of the password file that will be used for authentication. This should be outside of the directories that are being served. We will create this file shortly.
  • Require valid-user: This specifies that only authenticated users should be given access to this resource. This is what actually stops unauthorized users from entering.

When you are finished, save and close the file.

Create the .htpasswd file for Authentication

The location that we selected for our password file was "/etc/phpmyadmin/.htpasswd". We can now create this file and pass it an initial user with the htpasswd utility:

  • sudo htpasswd -c /etc/phpmyadmin/.htpasswd username

You will be prompted to select and confirm a password for the user you are creating. Afterwards, the file is created with the hashed password that you entered.

If you want to enter an additional user, you need to do so without the -c flag, like this:

  • sudo htpasswd /etc/phpmyadmin/.htpasswd additionaluser

Now, when you access your phpMyAdmin subdirectory, you will be prompted for the additional account name and password that you just configured:

https://domain_name_or_IP/phpmyadmin

phpMyAdmin apache password

After entering the Apache authentication, you'll be taken to the regular phpMyAdmin authentication page to enter your other credentials. This will add an additional layer of security since phpMyAdmin has suffered from vulnerabilities in the past.
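
If you only administer your databases from a few known locations, you could optionally tighten the same .htaccess file further by requiring both a valid login and an allowed source address. This is only a hardening sketch; 203.0.113.5 is a placeholder for your own IP address:

/usr/share/phpmyadmin/.htaccess
AuthType Basic
AuthName "Restricted Files"
AuthUserFile /etc/phpmyadmin/.htpasswd
# allow access only when the visitor authenticates AND comes from the listed address
<RequireAll>
    Require valid-user
    Require ip 203.0.113.5
</RequireAll>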

Conclusion

You should now have phpMyAdmin configured and ready to use on your Ubuntu 16.04 server. Using this interface, you can easily create databases, users, tables, etc., and perform the usual operations like deleting and modifying structures and data.

https://www.digitalocean.com/community/tutorials/how-to-install-and-secure-phpmyadmin-on-ubuntu-16-04

Author: Angelo A Vitale
Last update: 2018-12-15 10:39


How To Secure Apache with Let's Encrypt on Ubuntu 16.04

How To Secure Apache with Let's Encrypt on Ubuntu 16.04

Introduction

This tutorial will show you how to set up a TLS/SSL certificate from Let’s Encrypt on an Ubuntu 16.04 server running Apache as a web server.

SSL certificates are used within web servers to encrypt the traffic between the server and client, providing extra security for users accessing your application. Let’s Encrypt provides an easy way to obtain and install trusted certificates for free.

Prerequisites

In order to complete this guide, you will need:

  • An Ubuntu 16.04 server with a non-root sudo-enabled user, which you can set up by following our Initial Server Setup guide
  • The Apache web server installed with one or more domain names properly configured through Virtual Hosts that specify ServerName.

When you are ready to move on, log into your server using your sudo-enabled account.

Step 1 — Install the Let's Encrypt Client

Let's Encrypt certificates are fetched via client software running on your server. The official client is called Certbot, and its developers maintain their own Ubuntu software repository with up-to-date versions. Because Certbot is in such active development it's worth using this repository to install a newer version than Ubuntu provides by default.

First, add the repository:

  • sudo add-apt-repository ppa:certbot/certbot

You'll need to press ENTER to accept. Afterwards, update the package list to pick up the new repository's package information:

  • sudo apt-get update

And finally, install Certbot from the new repository with apt-get:

  • sudo apt-get install python-certbot-apache

The certbot Let's Encrypt client is now ready to use.

Step 2 — Set Up the SSL Certificate

Generating the SSL certificate for Apache using Certbot is quite straightforward. The client will automatically obtain and install a new SSL certificate that is valid for the domains provided as parameters.

To execute the interactive installation and obtain a certificate that covers only a single domain, run the certbot command like so, where example.com is your domain:

  • sudo certbot --apache -d example.com

If you want to install a single certificate that is valid for multiple domains or subdomains, you can pass them as additional parameters to the command. The first domain name in the list of parameters will be the base domain used by Let’s Encrypt to create the certificate, and for that reason we recommend that you pass the bare top-level domain name as first in the list, followed by any additional subdomains or aliases:

  • sudo certbot --apache -d example.com -d www.example.com

For this example, the base domain will be example.com.

If you have multiple virtual hosts, you should run certbot once for each to generate a new certificate for each. You can distribute multiple domains and subdomains across your virtual hosts in any way.
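
For example, if you had configured the two virtual hosts from the earlier guides, you could run the client once per site with the same flags shown above. This is just a sketch; example.com and test.com stand in for your own domains:

# request a separate certificate for each virtual host, covering the bare and www names
for domain in example.com test.com; do
    sudo certbot --apache -d "$domain" -d "www.$domain"
done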

After the dependencies are installed, you will be presented with a step-by-step guide to customize your certificate options. You will be asked to provide an email address for lost key recovery and notices, and you will be able to choose between enabling both http and https access or forcing all requests to redirect to https. It is usually safest to require https, unless you have a specific need for unencrypted http traffic.

When the installation is finished, you should be able to find the generated certificate files at /etc/letsencrypt/live. You can verify the status of your SSL certificate with the following link (don’t forget to replace example.com with your base domain):

https://www.ssllabs.com/ssltest/analyze.html?d=example.com&latest

You should now be able to access your website using a https prefix.

Step 3 — Verifying Certbot Auto-Renewal

Let’s Encrypt certificates only last for 90 days. However, the certbot package we installed takes care of this for us by running certbot renew twice a day via a systemd timer. On non-systemd distributions this functionality is provided by a cron script placed in /etc/cron.d. The task runs twice daily and will renew any certificate that's within thirty days of expiration.
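
If you want to confirm that this automation is present on your server, you can look for the timer or the packaged cron job. These are just inspection commands, and the exact names may vary with the Certbot version:

# list systemd timers and look for a certbot entry
systemctl list-timers | grep -i certbot
# on setups that use cron instead, inspect the packaged cron job
cat /etc/cron.d/certbot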

To test the renewal process, you can do a dry run with certbot:

  • sudo certbot renew --dry-run

If you see no errors, you're all set. When necessary, Certbot will renew your certificates and reload Apache to pick up the changes. If the automated renewal process ever fails, Let’s Encrypt will send a message to the email you specified, warning you when your certificate is about to expire.

Conclusion

In this guide, we saw how to install a free SSL certificate from Let’s Encrypt in order to secure a website hosted with Apache. We recommend that you check the official Let’s Encrypt blog for important updates from time to time, and read the Certbot documentation for more details about the Certbot client.

https://www.digitalocean.com/community/tutorials/how-to-secure-apache-with-let-s-encrypt-on-ubuntu-16-04

Author: Angelo A Vitale
Last update: 2018-12-26 23:02


How To Set Up Apache Virtual Hosts on Ubuntu 16.04

How To Set Up Apache Virtual Hosts on Ubuntu 16.04

Introduction

The Apache web server is the most popular way of serving web content on the internet. It accounts for more than half of all active websites on the internet and is extremely powerful and flexible.

Apache breaks its functionality and components into individual units that can be customized and configured independently. The basic unit that describes an individual site or domain is called a virtual host.

These designations allow the administrator to use one server to host multiple domains or sites off of a single interface or IP by using a matching mechanism. This is relevant to anyone looking to host more than one site off of a single VPS.

Each domain that is configured will direct the visitor to a specific directory holding that site's information, never indicating that the same server is also responsible for other sites. This scheme is expandable without any software limit as long as your server can handle the load.

In this guide, we will walk you through how to set up Apache virtual hosts on an Ubuntu 16.04 VPS. During this process, you'll learn how to serve different content to different visitors depending on which domains they are requesting.

 

Prerequisites

Before you begin this tutorial, you should create a non-root user as described in steps 1-4 here.

You will also need to have Apache installed in order to work through these steps. If you haven't already done so, you can get Apache installed on your server through apt-get:

  • sudo apt-get update
  • sudo apt-get install apache2

After these steps are complete, we can get started.

For the purposes of this guide, our configuration will make a virtual host for example.com and another for test.com. These will be referenced throughout the guide, but you should substitute your own domains or values while following along.

To learn how to set up your domain names with DigitalOcean, follow this link. If you do not have domains available to play with, you can use dummy values.

We will show how to edit your local hosts file later on to test the configuration if you are using dummy values. This will allow you to test your configuration from your home computer, even though your content won't be available through the domain name to other visitors.

 

Step One — Create the Directory Structure

The first step that we are going to take is to make a directory structure that will hold the site data that we will be serving to visitors.

Our document root (the top-level directory that Apache looks at to find content to serve) will be set to individual directories under the /var/www directory. We will create a directory here for both of the virtual hosts we plan on making.

Within each of these directories, we will create a public_html folder that will hold our actual files. This gives us some flexibility in our hosting.

For instance, for our sites, we're going to make our directories like this:

  • sudo mkdir -p /var/www/example.com/public_html
  • sudo mkdir -p /var/www/test.com/public_html

Here, example.com and test.com represent the domain names that we want to serve from our VPS.

 

Step Two — Grant Permissions

Now we have the directory structure for our files, but they are owned by our root user. If we want our regular user to be able to modify files in our web directories, we can change the ownership by doing this:

  • sudo chown -R $USER:$USER /var/www/example.com/public_html
  • sudo chown -R $USER:$USER /var/www/test.com/public_html

The $USER variable will take the value of the user you are currently logged in as when you press Enter. By doing this, our regular user now owns the public_html subdirectories where we will be storing our content.

We should also modify our permissions a little bit to ensure that read access is permitted to the general web directory and all of the files and folders it contains so that pages can be served correctly:

  • sudo chmod -R 755 /var/www

Your web server should now have the permissions it needs to serve content, and your user should be able to create content within the necessary folders.
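
If you prefer a slightly stricter scheme than a blanket 755, a common optional alternative is to give directories 755 (so Apache can traverse them) and plain files 644 (so nothing is marked executable). This is not part of the original steps, just a sketch:

# directories keep the execute bit so the web server can enter them
sudo find /var/www -type d -exec chmod 755 {} \;
# regular files only need to be readable
sudo find /var/www -type f -exec chmod 644 {} \;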

 

Step Three — Create Demo Pages for Each Virtual Host

We have our directory structure in place. Let's create some content to serve.

We're just going for a demonstration, so our pages will be very simple. We're just going to make an index.html page for each site.

Let's start with example.com. We can open up an index.html file in our editor by typing:

  • nano /var/www/example.com/public_html/index.html

In this file, create a simple HTML document that indicates the site it is connected to. My file looks like this:

/var/www/example.com/public_html/index.html
<html>
  <head>
    <title>Welcome to Example.com!</title>
  </head>
  <body>
    <h1>Success!  The example.com virtual host is working!</h1>
  </body>
</html>

Save and close the file when you are finished.

We can copy this file to use as the basis for our second site by typing:

  • cp /var/www/example.com/public_html/index.html /var/www/test.com/public_html/index.html

We can then open the file and modify the relevant pieces of information:

  • nano /var/www/test.com/public_html/index.html
/var/www/test.com/public_html/index.html
<html>
  <head>
    <title>Welcome to Test.com!</title>
  </head>
  <body>
    <h1>Success!  The test.com virtual host is working!</h1>
  </body>
</html>

Save and close this file as well. You now have the pages necessary to test the virtual host configuration.

 

Step Four — Create New Virtual Host Files

Virtual host files are the files that specify the actual configuration of our virtual hosts and dictate how the Apache web server will respond to various domain requests.

Apache comes with a default virtual host file called 000-default.conf that we can use as a jumping off point. We are going to copy it over to create a virtual host file for each of our domains.

We will start with one domain, configure it, copy it for our second domain, and then make the few further adjustments needed. The default Ubuntu configuration requires that each virtual host file end in .conf.

Create the First Virtual Host File

Start by copying the file for the first domain:

  • sudo cp /etc/apache2/sites-available/000-default.conf /etc/apache2/sites-available/example.com.conf

Open the new file in your editor with root privileges:

  • sudo nano /etc/apache2/sites-available/example.com.conf

The file will look something like this (I've removed the comments here to make the file more approachable):

/etc/apache2/sites-available/example.com.conf
<VirtualHost *:80>
    ServerAdmin webmaster@localhost
    DocumentRoot /var/www/html
    ErrorLog ${APACHE_LOG_DIR}/error.log
    CustomLog ${APACHE_LOG_DIR}/access.log combined
</VirtualHost>

As you can see, there's not much here. We will customize the items here for our first domain and add some additional directives. This virtual host section matches any requests that are made on port 80, the default HTTP port.

First, we need to change the ServerAdmin directive to an email that the site administrator can receive emails through.

ServerAdmin admin@example.com

After this, we need to add two directives. The first, called ServerName, establishes the base domain that should match for this virtual host definition. This will most likely be your domain. The second, called ServerAlias, defines further names that should match as if they were the base name. This is useful for matching hosts you defined, like www:

ServerName example.com
ServerAlias www.example.com

The only other thing we need to change for a basic virtual host file is the location of the document root for this domain. We already created the directory we need, so we just need to alter the DocumentRoot directive to reflect the directory we created:

DocumentRoot /var/www/example.com/public_html

In total, our virtual host file should look like this:

/etc/apache2/sites-available/example.com.conf
<VirtualHost *:80>
    ServerAdmin admin@example.com
    ServerName example.com
    ServerAlias www.example.com
    DocumentRoot /var/www/example.com/public_html
    ErrorLog ${APACHE_LOG_DIR}/error.log
    CustomLog ${APACHE_LOG_DIR}/access.log combined
</VirtualHost>

Save and close the file.

Copy First Virtual Host and Customize for Second Domain

Now that we have our first virtual host file established, we can create our second one by copying that file and adjusting it as needed.

Start by copying it:

  • sudo cp /etc/apache2/sites-available/example.com.conf /etc/apache2/sites-available/test.com.conf

Open the new file with root privileges in your editor:

  • sudo nano /etc/apache2/sites-available/test.com.conf

You now need to modify all of the pieces of information to reference your second domain. When you are finished, it may look something like this:

/etc/apache2/sites-available/test.com.conf
<VirtualHost *:80>
    ServerAdmin admin@test.com
    ServerName test.com
    ServerAlias www.test.com
    DocumentRoot /var/www/test.com/public_html
    ErrorLog ${APACHE_LOG_DIR}/error.log
    CustomLog ${APACHE_LOG_DIR}/access.log combined
</VirtualHost>

Save and close the file when you are finished.

 

Step Five — Enable the New Virtual Host Files

Now that we have created our virtual host files, we must enable them. Apache includes some tools that allow us to do this.

We can use the a2ensite tool to enable each of our sites like this:

  • sudo a2ensite example.com.conf
  • sudo a2ensite test.com.conf

Next, disable the default site defined in 000-default.conf:

  • sudo a2dissite 000-default.conf

When you are finished, you need to restart Apache to make these changes take effect:

  • sudo systemctl restart apache2

In other documentation, you may also see an example using the service command:

  • sudo service apache2 restart

This command will still work, but it may not give the output you're used to seeing on other systems, since it's now a wrapper around systemd's systemctl.
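
Whichever form you use, it can be helpful to check the configuration syntax before restarting so that a typo in a virtual host file does not take the server down. This is an optional step:

# reports "Syntax OK" when the virtual host files parse cleanly
sudo apache2ctl configtest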

 

Step Six — Set Up Local Hosts File (Optional)

If you haven't been using actual domain names that you own to test this procedure and have been using some example domains instead, you can at least test the functionality of this process by temporarily modifying the hosts file on your local computer.

This will intercept any requests for the domains that you configured and point them to your VPS server, just as the DNS system would do if you were using registered domains. This will only work from your computer though, and is simply useful for testing purposes.

Make sure you are operating on your local computer for these steps and not your VPS server. You will need to know the computer's administrative password or otherwise be a member of the administrative group.

If you are on a Mac or Linux computer, edit your local file with administrative privileges by typing:

  • sudo nano /etc/hosts

If you are on a Windows machine, you can find instructions on altering your hosts file here.

The details that you need to add are the public IP address of your VPS server followed by the domain you want to use to reach that VPS.

For the domains that I used in this guide, assuming that my VPS IP address is 111.111.111.111, I could add the following lines to the bottom of my hosts file:

/etc/hosts
127.0.0.1   localhost
127.0.1.1   guest-desktop
111.111.111.111 example.com
111.111.111.111 test.com

This will intercept any requests for example.com and test.com made on our computer and send them to our server at 111.111.111.111. This is what we want when we do not actually own these domains and just need to test our virtual hosts.

Save and close the file.

 

Step Seven — Test your Results

Now that you have your virtual hosts configured, you can test your setup easily by going to the domains that you configured in your web browser:

http://example.com

You should see a page that looks like this:

Apache virt host example

Likewise, if you visit your second page:

http://test.com

You will see the file you created for your second site:

Apache virt host test

If both of these sites work well, you've successfully configured two virtual hosts on the same server.

If you adjusted your home computer's hosts file, you may want to delete the lines you added now that you verified that your configuration works. This will prevent your hosts file from being filled with entries that are not actually necessary.

If you need to access this long term, consider purchasing a domain name for each site you need and setting it up to point to your VPS server.

 

Conclusion

If you followed along, you should now have a single server handling two separate domain names. You can expand this process by following the steps we outlined above to make additional virtual hosts.

There is no software limit on the number of domain names Apache can handle, so feel free to make as many as your server is capable of handling.

https://www.digitalocean.com/community/tutorials/how-to-set-up-apache-virtual-hosts-on-ubuntu-16-04

Author: Angelo A Vitale
Last update: 2018-12-10 20:20


max file size - Locating php.ini to change max file size

From shell: php --ini
From shell: sudo find / -iname '*php.ini*'
sudo nano /etc/php/7.0/apache2/php.ini

You need to edit the following three settings in your php.ini file, located for example at /etc/php/7.0/apache2/php.ini (the exact path depends on your PHP version; use php --ini or the find command above to locate it). Here is a set of instructions to follow line by line.

Type "sudo nano /etc/php/7.0/apache2/php.ini"
Press Ctrl and W and type "post_max_size"
Change the value to the number of Mb you want your site to accept as uploads
Press Ctrl and W and type "upload_max_filesize"
Change the value to the number of Mb you want your site to accept as uploads
Press Ctrl and W and type "max_execution_time"
Change the value to 600
Press Ctrl and O
Press Ctrl and X
Type sudo apachectl restart
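
After restarting Apache, you can sanity-check the values. Note that command-line PHP may load a different php.ini than the Apache module does, so treat this only as a rough check (or confirm through a phpinfo page as described in the next article):

# show which ini file the CLI loads, then print the three settings
php --ini
php -r 'echo ini_get("post_max_size"), " ", ini_get("upload_max_filesize"), " ", ini_get("max_execution_time"), PHP_EOL;'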

https://docs.moodle.org/36/en/File_upload_size#Ubuntu_Linux_Instructions
https://wiki.gentoo.org/wiki/Nano/Basics_Guide
https://faq.phpmyfaq.de/content/23/19/en/can-i-change-the-size-of-the-attachments.html

Author: Angelo A Vitale
Last update: 2018-12-11 01:50


How To Change Your PHP Settings on Ubuntu 14.04

Introduction

PHP is a server side scripting language used by many popular CMS and blog platforms like WordPress and Drupal. It is also part of the popular LAMP and LEMP stacks. Updating the PHP configuration settings is a common task when setting up a PHP-based website, but locating the exact PHP configuration file may not be easy. A server often runs multiple installations of PHP, each with its own configuration file, so knowing which file to edit and what the current settings are can be a bit of a mystery.

This guide will show how to view the current PHP configuration settings of your web server and how to make updates to the PHP settings.

 

Prerequisites

For this guide, you need an Ubuntu 14.04 server with PHP running behind a web server, for example as part of a LAMP or LEMP stack. The steps also apply to DigitalOcean One-click Apps that include PHP.

Note: This tutorial assumes you are running Ubuntu 14.04. Editing the php.ini file should be the same on other systems, but the file locations might be different.

All the commands in this tutorial should be run as a non-root user. If root access is required for the command, it will be preceded by sudo.

 

Reviewing the PHP Configuration

You can review the live PHP configuration by placing a page with a phpinfo function along with your website files.

To create a file with this command, first change into the directory that contains your website files. For example, the default directory for webpage files for Apache on Ubuntu 14.04 is /var/www/html/:

  • cd /var/www/html

Then, create the info.php file:

  • sudo nano /var/www/html/info.php

Paste the following lines into this file and save it:

info.php
<?php
phpinfo();
?>

Note: Some DigitalOcean One-click Apps have an info.php file placed in the web root automatically.

When visiting the info.php file on your web server (http://www.example.com/info.php), you will see a page that displays details on the PHP environment, OS version, paths, and values of configuration settings. The file shown to the right of the Loaded Configuration File line is the proper file to edit in order to update your PHP settings.

PHP Info Page

This page can be used to reveal the current settings your web server is using. For example, using the Find function of your web browser, you can search for the settings named post_max_size and upload_max_filesize to see the current values that restrict file upload sizes.

Warning: Since the info.php file displays version details of the OS, Web Server, and PHP, this file should be removed when it is not needed to keep the server as secure as possible.

 

Modifying the PHP Configuration

The php.ini file can be edited to change the settings and configuration of how PHP functions. This section gives a few common examples.

Sometimes a PHP application might need to allow for larger upload files such as uploading themes and plugins on a WordPress site. To allow larger uploads for your PHP application, edit the php.ini file with the following command (Change the path and file to match your Loaded Configuration File. This example shows the path for Apache on Ubuntu 14.04.):

  • sudo nano /etc/php5/apache2/php.ini

The default lines that control the file size upload are:

php.ini
post_max_size = 8M
upload_max_filesize = 2M

Change these default values to your desired maximum file upload size. For example, if you needed to upload a 30MB file, you would change these lines to:

php.ini
post_max_size = 30M
upload_max_filesize = 30M

Other common resource settings include the amount of memory PHP can use as set by memory_limit:

php.ini
memory_limit = 128M

or max_execution_time, which defines how many seconds a PHP process can run for:

php.ini
max_execution_time = 30

When you have the php.ini file configured for your needs, save the changes and exit the text editor. Then restart Apache so that the new settings take effect:

  • sudo service apache2 restart

Refreshing the info.php page should now show your updated settings. Remember to remove the info.php when you are done changing your PHP configuration.
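
For example, once you have confirmed the settings, the diagnostic page can be deleted with a single command (adjust the path if your web root differs):

# remove the phpinfo page so it does not expose server details
sudo rm /var/www/html/info.php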

 

Conclusion

Many PHP-based applications require slight changes to the PHP configuration. By using the phpinfo function, the exact PHP configuration file and settings are easy to find. Use the method described in this article to make these changes.

https://www.digitalocean.com/community/tutorials/how-to-change-your-php-settings-on-ubuntu-14-04

Author: Angelo A Vitale
Last update: 2018-12-11 01:53


How To Install and Secure phpMyAdmin on Ubuntu 16.04

Introduction

While many users need the functionality of a database management system like MySQL, they may not feel comfortable interacting with the system solely from the MySQL prompt.

phpMyAdmin was created so that users can interact with MySQL through a web interface. In this guide, we'll discuss how to install and secure phpMyAdmin so that you can safely use it to manage your databases from an Ubuntu 16.04 system.


Prerequisites

Before you get started with this guide, you need to have some basic steps completed.

First, we'll assume that you are using a non-root user with sudo privileges, as described in steps 1-4 in the initial server setup of Ubuntu 16.04.

We're also going to assume that you've completed a LAMP (Linux, Apache, MySQL, and PHP) installation on your Ubuntu 16.04 server. If this is not completed yet, you can follow this guide on installing a LAMP stack on Ubuntu 16.04.

Finally, there are important security considerations when using software like phpMyAdmin, since it:

  • Communicates directly with your MySQL installation
  • Handles authentication using MySQL credentials
  • Executes and returns results for arbitrary SQL queries

For these reasons, and because it is a widely-deployed PHP application which is frequently targeted for attack, you should never run phpMyAdmin on remote systems over a plain HTTP connection. If you do not have an existing domain configured with an SSL/TLS certificate, you can follow this guide on securing Apache with Let's Encrypt on Ubuntu 16.04.

Once you are finished with these steps, you're ready to get started with this guide.


Step One — Install phpMyAdmin

To get started, we will install phpMyAdmin from the default Ubuntu repositories.

We can do this by updating our local package index and then using the apt packaging system to pull down the files and install them on our system:

  • sudo apt-get update
  • sudo apt-get install phpmyadmin php-mbstring php-gettext

This will ask you a few questions in order to configure your installation correctly.

Warning: When the first prompt appears, apache2 is highlighted, but not selected. If you do not hit Space to select Apache, the installer will not move the necessary files during installation. Hit Space, Tab, and then Enter to select Apache.


  • For the server selection, choose apache2.
  • Select yes when asked whether to use dbconfig-common to set up the database
  • You will be prompted for your database administrator's password
  • You will then be asked to choose and confirm a password for the phpMyAdmin application itself

The installation process actually adds the phpMyAdmin Apache configuration file into the /etc/apache2/conf-enabled/ directory, where it is automatically read.

The only thing we need to do is explicitly enable the PHP mcrypt and mbstring extensions, which we can do by typing:

  • sudo phpenmod mcrypt
  • sudo phpenmod mbstring

Afterwards, you'll need to restart Apache for your changes to be recognized:

  • sudo systemctl restart apache2

You can now access the web interface by visiting your server's domain name or public IP address followed by /phpmyadmin:

https://domain_name_or_IP/phpmyadmin

phpMyAdmin login screen

You can now log into the interface using the root username and the administrative password you set up during the MySQL installation.

When you log in, you'll see the user interface, which will look something like this:

phpMyAdmin user interface


Step Two — Secure your phpMyAdmin Instance

We were able to get our phpMyAdmin interface up and running fairly easily. However, we are not done yet. Because of its ubiquity, phpMyAdmin is a popular target for attackers. We should take extra steps to prevent unauthorized access.

One of the easiest ways of doing this is to place a gateway in front of the entire application. We can do this using Apache's built-in .htaccess authentication and authorization functionalities.

Configure Apache to Allow .htaccess Overrides

First, we need to enable the use of .htaccess file overrides by editing our Apache configuration file.

We will edit the linked file that has been placed in our Apache configuration directory:

  • sudo nano /etc/apache2/conf-available/phpmyadmin.conf

We need to add an AllowOverride All directive within the <Directory /usr/share/phpmyadmin> section of the configuration file, like this:

/etc/apache2/conf-available/phpmyadmin.conf
<Directory /usr/share/phpmyadmin>
    Options FollowSymLinks
    DirectoryIndex index.php
    AllowOverride All
    . . .
</Directory>

When you have added this line, save and close the file.

To implement the changes you made, restart Apache:

  • sudo systemctl restart apache2

Create an .htaccess File

Now that we have enabled .htaccess use for our application, we need to create one to actually implement some security.

In order for this to be successful, the file must be created within the application directory. We can create the necessary file and open it in our text editor with root privileges by typing:

  • sudo nano /usr/share/phpmyadmin/.htaccess

Within this file, we need to enter the following information:

/usr/share/phpmyadmin/.htaccess

AuthType Basic
AuthName "Restricted Files"
AuthUserFile /etc/phpmyadmin/.htpasswd
Require valid-user

Let's go over what each of these lines means:

  • AuthType Basic: This line specifies the authentication type that we are implementing. This type will implement password authentication using a password file.
  • AuthName: This sets the message for the authentication dialog box. You should keep this generic so that unauthorized users won't gain any information about what is being protected.
  • AuthUserFile: This sets the location of the password file that will be used for authentication. This should be outside of the directories that are being served. We will create this file shortly.
  • Require valid-user: This specifies that only authenticated users should be given access to this resource. This is what actually stops unauthorized users from entering.

When you are finished, save and close the file.

Create the .htpasswd file for Authentication

The location that we selected for our password file was "/etc/phpmyadmin/.htpasswd". We can now create this file and pass it an initial user with the htpasswd utility:

  • sudo htpasswd -c /etc/phpmyadmin/.htpasswd username

You will be prompted to select and confirm a password for the user you are creating. Afterwards, the file is created with the hashed password that you entered.

If you want to enter an additional user, you need to do so without the -c flag, like this:

  • sudo htpasswd /etc/phpmyadmin/.htpasswd additionaluser

Now, when you access your phpMyAdmin subdirectory, you will be prompted for the additional account name and password that you just configured:

https://domain_name_or_IP/phpmyadmin

phpMyAdmin apache password

After entering the Apache authentication, you'll be taken to the regular phpMyAdmin authentication page to enter your other credentials. This will add an additional layer of security since phpMyAdmin has suffered from vulnerabilities in the past.


Conclusion

You should now have phpMyAdmin configured and ready to use on your Ubuntu 16.04 server. Using this interface, you can easily create databases, users, tables, etc., and perform the usual operations like deleting and modifying structures and data.

In addition, the following commands had to be run at the command line.

This is usually needed when phpMyAdmin has not been wired into the Apache configuration correctly. If you installed Apache and phpMyAdmin with sudo apt-get install (rather than from source), the procedure below should work for you.

sudo ln -s /etc/phpmyadmin/apache.conf /etc/apache2/conf-available/phpmyadmin.conf

sudo ln -s /usr/share/phpmyadmin /var/www/html/phpmyadmin

sudo service apache2 restart
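
A quick way to confirm that the links are in place before (or after) restarting is to list them and run a syntax check. This is only a sanity check, not part of the original fix:

# both paths should show up as symbolic links, and the syntax check should report "Syntax OK"
ls -l /etc/apache2/conf-available/phpmyadmin.conf /var/www/html/phpmyadmin
sudo apache2ctl configtest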

Author: Angelo A Vitale
Last update: 2018-12-11 11:49


How To Install and Configure OpenLDAP and phpLDAPadmin on an Ubuntu 14.04 Server

Introduction

LDAP, or Lightweight Directory Access Protocol, is a protocol designed to manage and access related information in a centralized, hierarchical file and directory structure.

In some ways, it operates similarly to a relational database, but this does not hold true for everything. The hierarchical structure is the main difference in how the data is related. It can be used to store any kind of information and it is often used as one component of a centralized authentication system.

In this guide, we will discuss how to install and configure an OpenLDAP server on an Ubuntu 14.04 server. We will then install and secure a phpLDAPadmin interface to provide an easy web interface.

Install LDAP and Helper Utilities

Before we begin, we must install the necessary software. Luckily, the packages are all available in Ubuntu's default repositories.

This is our first time using apt in this session, so we'll refresh our local package index. Afterwards we can install the packages we want:

sudo apt-get update
sudo apt-get install slapd ldap-utils

During the installation, you will be asked to select and confirm an administrator password for LDAP. You can actually put anything here because you'll have the opportunity to change it in just a moment.

Reconfigure slapd to Select Better Settings

Even though the package was just installed, we're going to go right ahead and reconfigure the defaults that Ubuntu installs with.

The reason for this is that while the package has the ability to ask a lot of important configuration questions, these are skipped over in the installation process. We can gain access to all of the prompts though by telling our system to reconfigure the package:

sudo dpkg-reconfigure slapd

There are quite a few new questions that will be asked as you go through this process. Let's go over these now:

  • Omit OpenLDAP server configuration? No
  • DNS domain name?
    • This option will determine the base structure of your directory path. Read the message to understand exactly how this will be implemented.
    • This is actually a rather open option. You can select whatever "domain name" value you'd like, even if you don't own the actual domain. However, if you have a domain name for the server, it's probably wise to use that.
    • For this guide, we're going to select test.com for our configuration.
  • Organization name?
    • This is, again, pretty much entirely up to your preferences.
    • For this guide, we will be using example as the name of our organization.
  • Administrator password?
    • As I mentioned in the installation section, this is your real opportunity to select an administrator password. Anything you select here will overwrite the previous password you used.
  • Database backend? HDB
  • Remove the database when slapd is purged? No
  • Move old database? Yes
  • Allow LDAPv2 protocol? No

At this point, your LDAP should be configured in a fairly reasonable way.
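
To verify that the directory came up with the base you selected, you can run a quick search with the ldap-utils installed earlier. This is just a sketch, assuming the test.com domain used in this guide and the default access rules, which allow anonymous reads:

# should return the base entry (dc=test,dc=com) and the admin entry
ldapsearch -x -H ldap://localhost -b dc=test,dc=com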

 
Install phpLDAPadmin to Manage LDAP with a Web Interface

Although it is very possible to administer LDAP through the command line, most users will find it easier to use a web interface. We're going to install phpLDAPadmin, which provides this functionality, to help remove some of the friction of learning the LDAP tools.

The Ubuntu repositories contain the phpLDAPadmin package. You can install it by typing:

sudo apt-get install phpldapadmin

This should install the administration interface, enable the necessary Apache virtual hosts files, and reload Apache.

The web server is now configured to serve your application, but we will make some additional changes. We need to configure phpLDAPadmin to use the domain schema we configured for LDAP, and we are also going to make some adjustments to secure our configuration a little bit.

Configure phpLDAPadmin

Now that the package is installed, we need to configure a few things so that it can connect with the LDAP directory structure that was created during the OpenLDAP configuration stage.

Begin by opening the main configuration file with root privileges in your text editor:

sudo nano /etc/phpldapadmin/config.php

In this file, we need to add the configuration details that we set up for our LDAP server. Start by looking for the host parameter and setting it to your server's domain name or public IP address. This parameter should reflect the way you plan on accessing the web interface:

$servers->setValue('server','host','server_domain_name_or_IP');

Next up, you'll need to configure the domain name you selected for your LDAP server. Remember, in our example we selected test.com. We need to translate this into LDAP syntax by turning each domain component (each piece separated by dots) into the value of a dc attribute.

All this means is that instead of writing test.com, we will write something like dc=test,dc=com. We should find the line that sets the server base parameter and use the format we just discussed to reference the domain we decided on:

$servers->setValue('server','base',array('dc=test,dc=com'));

We need to adjust this same thing in our login bind_id parameter. The cn parameter is already set as "admin". This is correct. We just need to adjust the dc portions again, just as we did above:

$servers->setValue('login','bind_id','cn=admin,dc=test,dc=com');

The last thing that we need to adjust is a setting that controls the visibility of warning messages. By default, phpLDAPadmin will throw quite a few annoying warning messages in its web interface about the template files; these have no impact on the functionality.

We can hide these by searching for the hide_template_warning parameter, uncommenting the line that contains it, and setting it to "true":

$config->custom->appearance['hide_template_warning'] = true;

This is the last thing that we need to adjust. You can save and close the file when you are finished.

 

Create an SSL Certificate

We want to secure our connection to the LDAP server with SSL so that outside parties cannot intercept our communications.

Since the admin interface is talking to the LDAP server itself on the local network, we do not need to use SSL for that connection. We just need to secure the external connection to our browser when we connect.

To do this, we just need to set up a self-signed SSL certificate that our server can use. This will not help us validate the identity of the server, but it will allow us to encrypt our messages.

The OpenSSL packages should be installed on your system by default. First, we should create a directory to hold our certificate and key:

sudo mkdir /etc/apache2/ssl

Next, we can create the key and certificate in one movement by typing:

sudo openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout /etc/apache2/ssl/apache.key -out /etc/apache2/ssl/apache.crt

You will have to answer some questions in order for the utility to fill out the fields in the certificate correctly. The only one that really matters is the prompt that says Common Name (e.g. server FQDN or YOUR name). Enter your server's domain name or IP address.

When you are finished, your certificate and key will be written to the /etc/apache2/ssl directory.
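
If you would like to confirm what was generated (an optional check, not part of the original steps), you can inspect the certificate's subject and validity dates with OpenSSL:

sudo openssl x509 -in /etc/apache2/ssl/apache.crt -noout -subject -dates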

 

Create a Password Authentication File

We also want to password protect our phpLDAPadmin location. Even though phpLDAPadmin has password authentication, this will provide an extra level of protection.

The utility that we need is contained in an Apache utility package. Get it by typing:

sudo apt-get install apache2-utils

Now that you have the utility available, you can create a password file that will contain a username that you choose and the associated hashed password.

We will keep this in the /etc/apache2 directory. Create the file and specify the username you want to use by typing:

sudo htpasswd -c /etc/apache2/htpasswd demo_user
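
You will be prompted to choose and confirm a password for the user. If you later want to add a second account to the same file, omit the -c flag so the existing file is not overwritten (another_user below is just a placeholder name):

sudo htpasswd /etc/apache2/htpasswd another_user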

Now, we are ready to modify Apache to take advantage of our security upgrades.

 

Secure Apache

The first thing we should do is enable the SSL module in Apache. We can do this by typing:

sudo a2enmod ssl

This will enable the module, allowing us to use it. We still need to configure Apache to take advantage of this though.

Currently, Apache is reading a file called 000-default.conf for regular, unencrypted HTTP connections. We need to tell it to redirect requests for our phpLDAPadmin interface to our HTTPS interface so that the connection is encrypted.

When we redirect traffic to use our SSL certificates, we'll also implement the password file to authenticate users. While we're modifying things, we'll also change the location of the phpLDAPadmin interface itself to minimize targeted attacks.

Modify the phpLDAPadmin Apache Configuration

The first thing we will do is modify the alias that is set up to serve our phpLDAPadmin files.

Open the file with root privileges in your text editor:

sudo nano /etc/phpldapadmin/apache.conf

This is the place where we need to decide on the URL location where we want to access our interface. The default is /phpldapadmin, but we want to change this to cut down on random login attempts by bots and malicious parties.

For this guide, we're going to use the location /superldap, but you should choose your own value.

We need to modify the line that specifies the Alias. This should be in an IfModule mod_alias.c block. When you are finished, it should look like this:

<IfModule mod_alias.c>
    Alias /superldap /usr/share/phpldapadmin/htdocs
</IfModule>

When you are finished, save and close the file.

Configure the HTTP Virtual Host

Next, we need to modify our current Virtual Hosts file. Open it with root privileges in your editor:

sudo nano /etc/apache2/sites-enabled/000-default.conf

Inside, you'll see a rather bare configuration file that looks like this:

<VirtualHost *:80>
    ServerAdmin webmaster@localhost
    DocumentRoot /var/www/html
    ErrorLog ${APACHE_LOG_DIR}/error.log
    CustomLog ${APACHE_LOG_DIR}/access.log combined
</VirtualHost>

We want to add information about our domain name or IP address to define our server name and we want to set up our redirect to point all HTTP requests to the HTTPS interface. This will match the alias we configured in the last section.

The changes we discussed will end up looking like this. Replace the example values with your own:

<VirtualHost *:80>
    ServerAdmin webmaster@server_domain_or_IP
    DocumentRoot /var/www/html
    ServerName server_domain_or_IP
    Redirect permanent /superldap https://server_domain_or_IP/superldap
    ErrorLog ${APACHE_LOG_DIR}/error.log
    CustomLog ${APACHE_LOG_DIR}/access.log combined
</VirtualHost>

Save and close the file when you are finished.

Configure the HTTPS Virtual Host File

Apache includes a default SSL Virtual Host file. However, it is not enabled by default.

We can enable it by typing:

sudo a2ensite default-ssl.conf

This will link the file from the sites-available directory into the sites-enabled directory. We can edit this file now by typing:

sudo nano /etc/apache2/sites-enabled/default-ssl.conf

This file is a bit more involved than the last one, so we will only discuss the changes that we have to make. All of the changes below should go within the Virtual Host block in the file.

First of all, set the ServerName value to your server's domain name or IP address again and change the ServerAdmin directive as well:

ServerAdmin webmaster@server_domain_or_IP
ServerName server_domain_or_IP

Next, we need to set the SSL certificate directives to point to the key and certificate that we created. The directives should already exist in your file, so just modify the files they point to:

SSLCertificateFile /etc/apache2/ssl/apache.crt
SSLCertificateKeyFile /etc/apache2/ssl/apache.key

The last thing we need to do is set up the location block that will implement our password protection for the entire phpLDAPadmin installation.

We do this by referencing the location where we are serving the phpLDAPadmin and setting up authentication using the file we generated. We will require anyone attempting to access this content to authenticate as a valid user:

<Location /superldap>
    AuthType Basic
    AuthName "Restricted Files"
    AuthUserFile /etc/apache2/htpasswd
    Require valid-user
</Location>

Save and close the file when you are finished.
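
Before restarting, you can optionally check the configuration for syntax errors (a standard Apache check, not specific to this guide):

sudo apache2ctl configtest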

Restart Apache to implement all of the changes that we have made:

sudo service apache2 restart

We can now move on to the actual interface.

 

Log into the phpLDAPadmin Web Interface

We have made the configuration changes we need to the phpLDAPadmin software. We can now begin to use it.

We can access the web interface by visiting our server's domain name or public IP address followed by the alias we configured. In our case, this was /superldap:

http://server_domain_name_or_IP/superldap

The first time you visit, you will probably see a warning about the site's SSL certificate:

phpLDAPadmin SSL warning

The warning is just here to let you know that the browser does not recognize the certificate authority that signed your certificate. Since we signed our own certificate, this is expected and not a problem.

Click the "Proceed anyway" button or whatever similar option your browser gives you.

Next, you will see the password prompt that you configured for Apache:

phpLDAPadmin password prompt

Fill in the account credentials you created with the htpasswd command. You will see the main phpLDAPadmin landing page:

phpLDAPadmin landing page

Click on the "login" link that you can see on the left-hand side of the page.

phpLDAPadmin login page

You will be taken to a login prompt. The login "DN" is like the username that you will be using. It contains the account name under "cn" and the domain name you selected for the server broken into "dc" sections as we described above.

It should be pre-populated with the correct value for the admin account if you configured phpLDAPadmin correctly. In our case, this looks like this:

cn=admin,dc=test,dc=com

For the password, enter the administrator password that you configured during the LDAP configuration.

You will be taken to the main interface:

phpLDAPadmin main page

 

Add Organizational Units, Groups, and Users

At this point, you are logged into the phpLDAPadmin interface. You have the ability to add users, organizational units, groups, and relationships.

LDAP is flexible in how you wish to structure your data and directory hierarchies. You can basically create whatever kind of structure you'd like and create rules for how they interact.

Since this process is the same on Ubuntu 14.04 as it was on Ubuntu 12.04, you can follow the steps laid out in the "Add Organizational Units, Groups, and Users" section of the LDAP installation article for Ubuntu 12.04.

The steps will be entirely the same on this installation, so follow along to get some practice working with the interface and learn about how to structure your units.

 

Conclusion

You should now have OpenLDAP installed and configured on your Ubuntu 14.04 server. You have also installed and configured a web interface to manage your structure through the phpLDAPadmin program. You have configured some basic security for the application by forcing SSL and password protecting the entire application.

The system that we have set up is quite flexible and you should be able to design your own organizational schema and manage groups of resources as your needs demand. In the next guide, we'll discuss how to configure your networked machines to use this LDAP server for system authentication.


https://www.digitalocean.com/community/tutorials/how-to-install-and-configure-openldap-and-phpldapadmin-on-an-ubuntu-14-04-server

Author: Angelo A Vitale
Last update: 2018-12-11 20:02


How to Install OpenLDAP Server on Ubuntu 16.04

What is OpenLDAP

OpenLDAP is a fast, open-source directory server that provides network clients with directory services. Client applications connect to the OpenLDAP server using the Lightweight Directory Access Protocol (LDAP) to access organizational information stored on that server. Given the appropriate access, clients can search the directory and modify or manipulate its records. OpenLDAP is efficient at both reading and modifying data in the directory.

OpenLDAP servers are most commonly used to provide centralized management of user accounts.

How to Install OpenLDAP Server on Ubuntu 16.04

Run the following command to install OpenLDAP server and the client command-line utilities from Ubuntu 16.04 package repository. slapd stands for the Stand-Alone LDAP Daemon.

sudo apt install slapd ldap-utils

You will be asked to set a password for the admin entry in the LDAP directory.

 

Once it’s done, slapd will be automatically started. You can check out its status with:

systemctl status slapd

By default, it runs as the openldap user, as defined in the /etc/default/slapd file.

Basic Post-Installation Configuration

The installation process installs the package without any configuration. To get our OpenLDAP server running properly, we need to do some basic post-installation configuration. Run the following command to start the configuration wizard.

sudo dpkg-reconfigure slapd

You will need to answer a series of questions. Answer these questions as follows:

Omit LDAP server configuration: NO.

openldap ubuntu

DNS domain name: Enter your domain name, such as linuxbabe.com. You will need to set a correct A record for your domain name. You can also use a subdomain such as directory.linuxbabe.com. This information is used to create the base DN (distinguished name) of the LDAP directory.

install openldap ubuntu

Organization name: Enter your organization name like LinuxBabe.

ldap server configuration in ubuntu 16.04 step by step

Administrator password: Enter the same password set during installation.

openldap server ubuntu 16.04

Database backend: MDB.

BDB (Berkeley Database) is slow and cumbersome. It is deprecated and support will be dropped in future OpenLDAP releases. HDB (Hierarchical Database) is a variant of the BDB backend and will also be deprecated.

MDB reads are 5-20x faster than BDB. Writes are 2-5x faster. And it consumes 1/4 as much RAM as BDB. So we choose MDB as the database backend.

openldap mdb

Do you want the database to be removed when slapd is purged? No.

install openldap server on ubuntu 16.04 LTS

Move old database? Yes.

openldap server configuration

Allow LDAPv2 protocol? No. The latest version of LDAP is LDAP v.3, developed in 1997. LDAPv2 is obsolete.

install ldap ubuntu

Now the process will reconfigure the OpenLDAP service according to your answers. Your OpenLDAP server is now ready to use.

openldap ubuntu 16.04 configuration

Configuring the LDAP Clients

/etc/ldap/ldap.conf is the configuration file for all OpenLDAP clients. Open this file.

sudo nano /etc/ldap/ldap.conf

We need to specify two parameters: the base DN and the URI of our OpenLDAP server. Copy and paste the following text at the end of the file. Replace your-domain and com as appropriate.

BASE     dc=your-domain,dc=com
URI      ldap://localhost

The first line defines the base DN. It tells the client programs where to start their search in the directory. If you used a subdomain when configuring OpenLDAP server, then you need to add the subdomain here like so

 
BASE      dc=subdomain,dc=your-domain,dc=com

The second line defines the URI of our OpenLDAP server. Since the LDAP server and client are on the same machine, we should set the URI to ldap://localhost.
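
With these defaults in place, the client tools can omit the base and URI. If you prefer to be explicit, or want to test against a different server, the same search can be written with the -H and -b options (a generic example; substitute your own values):

ldapsearch -x -H ldap://localhost -b dc=your-domain,dc=com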

Testing OpenLDAP Server

Now that OpenLDAP server is running and client configuration is done, run the following command to make test connections to the server.

ldapsearch -x

Output:

# extended LDIF
#
# LDAPv3
# base <dc=linuxbabe,dc=com> (default) with scope subtree
# filter: (objectclass=*)
# requesting: ALL
#

# linuxbabe.com
dn: dc=linuxbabe,dc=com
objectClass: top
objectClass: dcObject
objectClass: organization
o: LinuxBabe

# admin, linuxbabe.com
dn: cn=admin,dc=linuxbabe,dc=com
objectClass: simpleSecurityObject
objectClass: organizationalRole
cn: admin
description: LDAP administrator

# search result
search: 2
result: 0 Success

# numResponses: 3
# numEntries: 2

Result: 0 Success indicates that OpenLDAP server is working. If you get the following line, then it’s not working.

result: 32 No such object

Installing phpLDAPadmin

phpLDAPadmin is a web-based program for managing OpenLDAP server. The command-line utilities can be used to manage our OpenLDAP server, but for those who want an easy-to-use interface, you can install phpLDAPadmin.

Run the following command to install phpLDAPadmin from Ubuntu package repository.

sudo apt install phpldapadmin

If your Ubuntu server doesn’t have a web server running, then the above command will install the Apache web server as a dependency. If there’s already a web server such as Nginx, then Apache won’t be installed.

If you use Apache

The installation will put a configuration file, phpldapadmin.conf, under the /etc/apache2/conf-enabled/ directory. Once the installation is done, you can access the phpLDAPadmin web interface at

your-server-ip/phpldapadmin

or

your-domain.com/phpldapadmin

To enable HTTPS, you can obtain and install a free TLS certificate issued from Let’s Encrypt.
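
As a rough sketch of that step (assuming the certbot Apache plugin is available for your release and your domain already resolves to this server; the package may be named python-certbot-apache on older releases):

sudo apt install python3-certbot-apache
sudo certbot --apache -d your-domain.com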

If you use Nginx

Nginx users will need to manually create a server block file for phpLDAPadmin.

sudo nano /etc/nginx/conf.d/phpldapadmin.conf

Copy the following text and paste it to the file. Replace ldap.your-domain.com with your preferred domain name.

server {
        listen 80;
        server_name ldap.your-domain.com;

        root /usr/share/phpldapadmin/htdocs;
        index index.php index.html index.htm;

        error_log /var/log/nginx/phpldapadmin.error;
        access_log /var/log/nginx/phpldapadmin.access;

        location ~ \.php$ {
            fastcgi_pass unix:/run/php/php7.0-fpm.sock;
            fastcgi_index index.php;
            fastcgi_param SCRIPT_FILENAME  $document_root/$fastcgi_script_name;
            include fastcgi_params;
        }
}

Save and close the file. Then test the Nginx configuration.

sudo nginx -t

If the test is successful, reload Nginx for the changes to take effect.

sudo systemctl reload nginx

Now you can access phpLDAPadmin web interface at ldap.your-domain.com. To enable HTTPS, you can obtain and install a free TLS certificate issued from Let’s Encrypt.
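
Again as a rough sketch, the Nginx variant of the Let's Encrypt step might look like this (assuming the certbot Nginx plugin is available for your release):

sudo apt install python3-certbot-nginx
sudo certbot --nginx -d ldap.your-domain.com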

Configuring phpLDAPadmin

We need to do some configurations just like we did with the command-line client. The phpLDAPadmin configuration file is at /etc/phpldapadmin/config.php .

sudo nano /etc/phpldapadmin/config.php

Since OpenLDAP and phpLDAPadmin are running on the same machine, we will configure phpLDAPadmin to connect to localhost on the default LDAP port 389 without SSL/TLS encryption.

Line 293 specifies that phpLDAPadmin will connect to localhost.

$servers->setValue('server','host','127.0.0.1');

Line 296 is commented out by default, which means the standard port 389 will be used.

// $servers->setValue('server','port',389);

Line 335 is commented out by default, which means TLS encryption is not enabled.

// $servers->setValue('server','tls',false);

Then go to line 300.

$servers->setValue('server','base',array('dc=example,dc=com'));

Change it to:

$servers->setValue('server','base',array());

This will let phpLDAPadmin automatically detect the base DN of your OpenLDAP server. Next, you can disable anonymous login. Go to line 453.

// $servers->setValue('login','anon_bind',true);

By default, anonymous login is enabled. To disable it, you need to remove the comment character (the two slashes) and change true to false.

$servers->setValue('login','anon_bind',false);

You will probably want to disable template warnings because these warnings are annoying and unimportant. Go to line 161.

// $config->custom->appearance['hide_template_warning'] = false;

Remove the comment character and change false to true.

$config->custom->appearance['hide_template_warning'] = true;

Save and close the file.

Accessing phpLDAPadmin Web Interface

We can now test out the phpLDAPadmin tool with our web browser. When phpLDAPadmin first loads, it looks something like this.

phpldapadmin

To log into our OpenLDAP server, click on the login link. You will see the login dialog box. The default login DN is cn=admin,dc=example,dc=com. You may need to change dc=example. In my case, I need to change the login DN to cn=admin,dc=linuxbabe,dc=com.

openldap web interface

The password is the admin password you set during the configuration of OpenLDAP server. Once you log into phpLDAPadmin, you can manage this directory server.

phpldapadmin configuration

That’s it! I hope this tutorial helped you install and configure both OpenLDAP server and phpLDAPadmin on Ubuntu 16.04. In the next tutorial, we will see how to configure Ubuntu to authenticate user logins with OpenLDAP.


https://www.linuxbabe.com/ubuntu/install-configure-openldap-server-ubuntu-16-04

Author: Angelo A Vitale
Last update: 2018-12-11 20:03


OpenLDAP and phpLDAPadmin Address Book

OpenLDAP configuration with phpLDAPadmin

This section of the user guide will walk you through creating a simple address book, and adding an entry to it. This address book can be shared with your users. The most common set up is the creation of a company or organization address book that all the users can access through their e-mail client.

This is just a simple example of what can be done with OpenLDAP and phpLDAPadmin. For more complex examples, please refer to the official OpenLDAP documentation.

Connecting to phpLDAPadmin

To connect to phpLDAPadmin, browse to http://eapps-example.com/ldapadmin (substitute your own domain name for eapps-example.com).

This takes you to the phpLDAPadmin main screen, where you can log in.

phpLDAPadmin main screen


Click on login in the left navigation pane to log in. This takes you to the Authenticate to server My LDAP Server screen.

Authenticate to server
  • Login DN - cn=Manager,dc=my-domain,dc=com (use this exact string)

  • Password - the password for phpLDAPadmin is the hostname of your Virtual Server. To find the hostname from ISPmanager, go to Server Settings > Server parameters. The Server name shown there is the hostname of your Virtual Server and therefore your phpLDAPadmin password.

Once you have entered your login information, click on Authenticate. This takes you to the main phpLDAPadmin screen.

phpLDAPadmin logged in

Creating a simple address book

Once you have logged in, you can now create a simple address book that can be shared with other users. For example, this address book could be used as a company directory that listed all the contact information for your employees.

In the My LDAP Server section of the main phpLDAPadmin screen, click on Import.

Import


This opens the Import screen.

Import screen


Copy and paste the following text into the Or paste your LDIF here section of the screen:

dn: ou=people, dc=my-domain, dc=com
objectClass: top
objectClass: organizationalUnit
ou: people


The screen will now look like this:

LDIF file

Once you have pasted in the text, click Proceed >>.


If the import is successful, you will see this message: Adding ou=people,dc=my-domain,dc=com Success

LDIF import success

Adding address book entries

In the left navigation pane, under My LDAP Server, click the [+] (plus sign) to the left of dc=my-domain,dc=com (2). This will expand the listing.

The LDIF file you just imported creates an entry (called an "Organizational Unit" or "ou" in OpenLDAP) called ou=people. Click on people to create an entry in the address book.

people


The first time you click on ou=people, you will see these errors. They can be ignored, and should only appear once:

people errors


In the Select a template to edit the entry screen, select Generic: Address Book Entry

Generic: Address Book Entry


In the next screen, select Create a child entry

Create a child entry


In the next screen - Select a template for the creation process, select Generic: Address Book Entry.

Generic: Address Book Entry


This takes you to the New Address Book Entry (Step 1 of 1) screen. This is the default screen:

New Address Book Entry (Step 1 of 1) default


This is the screen with information filled in. All that is actually needed to create the entry is Last name (which will populate Common Name). If you are creating a shared address book, then you would also want to include the e-mail address and any other contact information as needed.

New Address Book Entry (Step 1 of 1)

Once you have entered the information, click on Create Object.
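
For reference, the entry the template builds corresponds roughly to an LDIF record like the following (a hypothetical example based on the inetOrgPerson object class; the names and e-mail address are placeholders):

dn: cn=Test User,ou=people,dc=my-domain,dc=com
objectClass: inetOrgPerson
cn: Test User
sn: User
mail: test.user@my-domain.com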


This takes you to the Create LDAP Entry screen. This is where you can review the information you just entered.

Create LDAP Entry

If everything is correct, click Commit. If anything is incorrect, you will have an opportunity to update in the next screen.

After you click on Commit, you should see this message:

Creation successful

Also you will see a way to update any information in the entry just below this message.


Now you can click on the [+] that is next to ou=people and see the new entry for Test User.

Test User

To add more users, simply go through the Adding address book entries process for each user.


https://support.eapps.com/index.php?/Knowledgebase/Article/View/437/55/user-guide---openldap-and-phpldapadmin

Author: Angelo A Vitale
Last update: 2018-12-11 20:38


How To Set Up vsftpd for a User's Directory on Ubuntu 16.04

Introduction

FTP, short for File Transfer Protocol, is a network protocol that was once widely used for moving files between a client and server. It has since been replaced by faster, more secure, and more convenient ways of delivering files. Many casual Internet users expect to download directly from their web browser over HTTPS, and command-line users are more likely to use secure protocols such as scp or SFTP.

FTP is still used to support legacy applications and workflows with very specific needs. If you have a choice of what protocol to use, consider exploring the more modern options. When you do need FTP, however, vsftpd is an excellent choice. Optimized for security, performance, and stability, vsftpd offers strong protection against many security problems found in other FTP servers and is the default for many Linux distributions.

In this tutorial, we'll show you how to configure vsftpd to allow a user to upload files to his or her home directory using FTP with login credentials secured by SSL/TLS.

 

Prerequisites

To follow along with this tutorial you will need:

  • An Ubuntu 16.04 server with a non-root user with sudo privileges: You can learn more about how to set up a user with these privileges in our Initial Server Setup with Ubuntu 16.04 guide.

Once you have an Ubuntu server in place, you're ready to begin.

 

Step 1 — Installing vsftpd

We'll start by updating our package list and installing the vsftpd daemon:

  • sudo apt-get update
  • sudo apt-get install vsftpd

When the installation is complete, we'll copy the configuration file so we can start with a blank configuration, saving the original as a backup.

  • sudo cp /etc/vsftpd.conf /etc/vsftpd.conf.orig

With a backup of the configuration in place, we're ready to configure the firewall.

 

Step 2 — Opening the Firewall

We'll check the firewall status to see if it’s enabled. If so, we’ll ensure that FTP traffic is permitted so you won’t run into firewall rules blocking you when it comes time to test.

  • sudo ufw status

In this case, only SSH is allowed through:

Output
Status: active

To                         Action      From
--                         ------      ----
OpenSSH                    ALLOW       Anywhere
OpenSSH (v6)               ALLOW       Anywhere (v6)

You may have other rules in place or no firewall rules at all. Since only ssh traffic is permitted in this case, we’ll need to add rules for FTP traffic.

We'll need to open ports 20 and 21 for FTP, port 990 for later when we enable TLS, and ports 40000-50000 for the range of passive ports we plan to set in the configuration file:

  • sudo ufw allow 20/tcp
  • sudo ufw allow 21/tcp
  • sudo ufw allow 990/tcp
  • sudo ufw allow 40000:50000/tcp
  • sudo ufw status

Now our firewall rules look like this:

Output
Status: active

To                         Action      From
--                         ------      ----
OpenSSH                    ALLOW       Anywhere
990/tcp                    ALLOW       Anywhere
20/tcp                     ALLOW       Anywhere
21/tcp                     ALLOW       Anywhere
40000:50000/tcp            ALLOW       Anywhere
OpenSSH (v6)               ALLOW       Anywhere (v6)
20/tcp (v6)                ALLOW       Anywhere (v6)
21/tcp (v6)                ALLOW       Anywhere (v6)
990/tcp (v6)               ALLOW       Anywhere (v6)
40000:50000/tcp (v6)       ALLOW       Anywhere (v6)

With vsftpd installed and the necessary ports open, we're ready to proceed to the next step.

 

Step 3 — Preparing the User Directory

For this tutorial, we're going to create a user, but you may already have a user in need of FTP access. We'll take care to preserve an existing user’s access to their data in the instructions that follow. Even so, we recommend you start with a new user until you've configured and tested your setup.

First, we’ll add a test user:

  • sudo adduser sammy

Assign a password when prompted and feel free to press "ENTER" through the other prompts.

FTP is generally more secure when users are restricted to a specific directory. vsftpd accomplishes this with chroot jails. When chroot is enabled for local users, they are restricted to their home directory by default. However, because of the way vsftpd secures the directory, it must not be writable by the user. This is fine for a new user who should only connect via FTP, but an existing user may need to write to their home folder if they also have shell access.

In this example, rather than removing write privileges from the home directory, we will create an ftp directory to serve as the chroot and a writable files directory to hold the actual files.

Create the ftp folder, set its ownership, and be sure to remove write permissions with the following commands:

  • sudo mkdir /home/sammy/ftp
  • sudo chown nobody:nogroup /home/sammy/ftp
  • sudo chmod a-w /home/sammy/ftp

Let's verify the permissions:

  • sudo ls -la /home/sammy/ftp
Output
total 8
4 dr-xr-xr-x  2 nobody nogroup 4096 Aug 24 21:29 .
4 drwxr-xr-x 3 sammy  sammy   4096 Aug 24 21:29 ..

Next, we'll create the directory where files can be uploaded and assign ownership to the user:

  • sudo mkdir /home/sammy/ftp/files
  • sudo chown sammy:sammy /home/sammy/ftp/files

A permissions check on the files directory should return the following:

  • sudo ls -la /home/sammy/ftp
Output
total 12
dr-xr-xr-x 3 nobody nogroup 4096 Aug 26 14:01 .
drwxr-xr-x 3 sammy  sammy   4096 Aug 26 13:59 ..
drwxr-xr-x 2 sammy  sammy   4096 Aug 26 14:01 files

Finally, we'll add a test.txt file to use when we test later on:

  • echo "vsftpd test file" | sudo tee /home/sammy/ftp/files/test.txt

Now that we've secured the ftp directory and allowed the user access to the files directory, we'll turn our attention to configuration.

 

Step 4 — Configuring FTP Access

We're planning to allow a single user with a local shell account to connect with FTP. The two key settings for this are already set in vsftpd.conf. Start by opening the config file to verify that the settings in your configuration match those below:

  • sudo nano /etc/vsftpd.conf
/etc/vsftpd.conf
. . .
# Allow anonymous FTP? (Disabled by default).
anonymous_enable=NO
#
# Uncomment this to allow local users to log in.
local_enable=YES
. . .

Next we'll need to change some values in the file. In order to allow the user to upload files, we’ll uncomment the write_enable setting so that we have:

/etc/vsftpd.conf
. . .
write_enable=YES
. . .

We’ll also uncomment the chroot to prevent the FTP-connected user from accessing any files or commands outside the directory tree.

/etc/vsftpd.conf
. . .
chroot_local_user=YES
. . .

We’ll add a user_sub_token in order to insert the username in our local_root directory path so our configuration will work for this user and any future users that might be added.

/etc/vsftpd.conf
user_sub_token=$USER
local_root=/home/$USER/ftp

We'll limit the range of ports that can be used for passive FTP to make sure enough connections are available:

/etc/vsftpd.conf
pasv_min_port=40000
pasv_max_port=50000

Note: We pre-opened the ports that we set here for the passive port range. If you change the values, be sure to update your firewall settings.

Since we’re only planning to allow FTP access on a case-by-case basis, we’ll set up the configuration so that access is given to a user only when they are explicitly added to a list rather than by default:

/etc/vsftpd.conf
userlist_enable=YES
userlist_file=/etc/vsftpd.userlist
userlist_deny=NO

userlist_deny toggles the logic. When it is set to "YES", users on the list are denied FTP access. When it is set to "NO", only users on the list are allowed access. When you're done making the change, save and exit the file.

Finally, we’ll create and add our user to the file. We'll use the -a flag to append to file:

  • echo "sammy" | sudo tee -a /etc/vsftpd.userlist

Double-check that it was added as you expected:

  • cat /etc/vsftpd.userlist
Output
sammy

Restart the daemon to load the configuration changes:

  • sudo systemctl restart vsftpd

Now we're ready for testing.

 

Step 5 — Testing FTP Access

We've configured the server to allow only the user sammy to connect via FTP. Let's make sure that's the case.

Anonymous users should fail to connect: We disabled anonymous access. Here we'll test that by trying to connect anonymously. If we've done it properly, anonymous users should be denied permission:

  • ftp -p 203.0.113.0
Output
Connected to 203.0.113.0.
220 (vsFTPd 3.0.3)
Name (203.0.113.0:default): anonymous
530 Permission denied.
ftp: Login failed.
ftp>

Close the connection:

  • bye

Users other than sammy should fail to connect: Next, we'll try connecting as our sudo user. They, too, should be denied access, and it should happen before they're allowed to enter their password.

  • ftp -p 203.0.113.0
Output
Connected to 203.0.113.0.
220 (vsFTPd 3.0.3)
Name (203.0.113.0:default): sudo_user
530 Permission denied.
ftp: Login failed.
ftp>

Close the connection:

  • bye

sammy should be able to connect, as well as read and write files: Here, we'll make sure that our designated user can connect:

  • ftp -p 203.0.113.0
Output
Connected to 203.0.113.0.
220 (vsFTPd 3.0.3)
Name (203.0.113.0:default): sammy
331 Please specify the password.
Password: your_user's_password
230 Login successful.
Remote system type is UNIX.
Using binary mode to transfer files.
ftp>

We'll change into the files directory, then use the get command to transfer the test file we created earlier to our local machine:

  • cd files
  • get test.txt
Output
227 Entering Passive Mode (203,0,113,0,169,12).
150 Opening BINARY mode data connection for test.txt (16 bytes).
226 Transfer complete.
16 bytes received in 0.0101 seconds (1588 bytes/s)
ftp>

We'll turn right back around and try to upload the file with a new name to test write permissions:

  • put test.txt upload.txt
Output
227 Entering Passive Mode (203,0,113,0,164,71).
150 Ok to send data.
226 Transfer complete.
16 bytes sent in 0.000894 seconds (17897 bytes/s)

Close the connection:

  • bye

Now that we've tested our configuration, we'll take steps to further secure our server.

 

Step 6 — Securing Transactions

Since FTP does not encrypt any data in transit, including user credentials, we'll enable TLS/SSL to provide that encryption. The first step is to create the SSL certificates for use with vsftpd.

We'll use openssl to create a new certificate and use the -days flag to make it valid for one year. In the same command, we'll add a private 2048-bit RSA key. Then by setting both the -keyout and -out flags to the same value, the private key and the certificate will be located in the same file.

We'll do this with the following command:

  • sudo openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout /etc/ssl/private/vsftpd.pem -out /etc/ssl/private/vsftpd.pem

You'll be prompted to provide address information for your certificate. Substitute your own information for the questions below:

Output
Generating a 2048 bit RSA private key
............................................................................+++
...........+++
writing new private key to '/etc/ssl/private/vsftpd.pem'
-----
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [AU]:US
State or Province Name (full name) [Some-State]:NY
Locality Name (eg, city) []:New York City
Organization Name (eg, company) [Internet Widgits Pty Ltd]:DigitalOcean
Organizational Unit Name (eg, section) []:
Common Name (e.g. server FQDN or YOUR name) []: your_IP_address
Email Address []:

For more detailed information about the certificate flags, see OpenSSL Essentials: Working with SSL Certificates, Private Keys and CSRs

Once you've created the certificates, open the vsftpd configuration file again:

  • sudo nano /etc/vsftpd.conf

Toward the bottom of the file, you should see two lines that begin with rsa_. Comment them out so they look like:

/etc/vsftpd.conf
# rsa_cert_file=/etc/ssl/certs/ssl-cert-snakeoil.pem
# rsa_private_key_file=/etc/ssl/private/ssl-cert-snakeoil.key

Below them, add the following lines which point to the certificate and private key we just created:

/etc/vsftpd.conf
rsa_cert_file=/etc/ssl/private/vsftpd.pem
rsa_private_key_file=/etc/ssl/private/vsftpd.pem

After that, we will force the use of SSL, which will prevent clients that can't deal with TLS from connecting. This is necessary in order to ensure all traffic is encrypted but may force your FTP user to change clients. Change ssl_enable to YES:

/etc/vsftpd.conf
ssl_enable=YES

After that, add the following lines to explicitly deny anonymous connections over SSL and to require SSL for both data transfer and logins:

/etc/vsftpd.conf
allow_anon_ssl=NO
force_local_data_ssl=YES
force_local_logins_ssl=YES

After this, we'll configure the server to use TLS, the preferred successor to SSL, by adding the following lines:

/etc/vsftpd.conf
ssl_tlsv1=YES
ssl_sslv2=NO
ssl_sslv3=NO

Finally, we will add two more options. First, we will not require SSL reuse because it can break many FTP clients. We will require "high" encryption cipher suites, which currently means key lengths equal to or greater than 128 bits:

/etc/vsftpd.conf
require_ssl_reuse=NO
ssl_ciphers=HIGH

When you're done, save and close the file.

Now, we need to restart the server for the changes to take effect:

  • sudo systemctl restart vsftpd

At this point, we will no longer be able to connect with an insecure command-line client. If we tried, we'd see something like:

  • ftp -p 203.0.113.0
  • Connected to 203.0.113.0.
  • 220 (vsFTPd 3.0.3)
  • Name (203.0.113.0:default): sammy
  • 530 Non-anonymous sessions must use encryption.
  • ftp: Login failed.
  • 421 Service not available, remote server has closed connection
  • ftp>

Next, we'll verify that we can connect using a client that supports TLS.

 

Step 7 — Testing TLS with FileZilla

Most modern FTP clients can be configured to use TLS encryption. We will demonstrate how to connect using FileZilla because of its cross platform support. Consult the documentation for other clients.

When you first open FileZilla, find the Site Manager icon just below the word File, the left-most icon on the top row. Click it:

Site Manager Screenshot

A new window will open. Click the "New Site" button in the bottom right corner:

New Site Button
Under "My Sites" a new icon with the words "New site" will appear. You can name it now or return later and use the Rename button.

You must fill out the "Host" field with your server's domain name or IP address. Under the "Encryption" drop down menu, select "Require explicit FTP over TLS".

For "Logon Type", select "Ask for password". Fill in the FTP user you created in the "User" field:

General Settings Tab
Click "Connect" at the bottom of the interface. You will be asked for the user's password:

Password Dialogue
Click "OK" to connect. You should now be connected to your server with TLS/SSL encryption.

Site Certificate Dialogue
When you’ve accepted the certificate, double-click the files folder and drag upload.txt to the left to confirm that you’re able to download files.

Download test.txt
When you’ve done that, right-click on the local copy, rename it to upload-tls.txt and drag it back to the server to confirm that you can upload files.

Rename and Upload
You’ve now confirmed that you can securely and successfully transfer files with SSL/TLS enabled.

 

Step 8 — Disabling Shell Access (Optional)

If you're unable to use TLS because of client requirements, you can gain some security by disabling the FTP user's ability to log in any other way. One relatively straightforward way to prevent it is by creating a custom shell. This will not provide any encryption, but it will limit the access of a compromised account to files accessible by FTP.

First, open a file called ftponly in the bin directory:

  • sudo nano /bin/ftponly

We'll add a message telling the user why they are unable to log in. Paste in the following:

#!/bin/sh
echo "This account is limited to FTP access only."

Change the permissions to make the file executable:

  • sudo chmod a+x /bin/ftponly

Open the list of valid shells:

  • sudo nano /etc/shells

At the bottom, add:

/etc/shells
. . .
/bin/ftponly

Update the user's shell with the following command:

  • sudo usermod sammy -s /bin/ftponly

Now try logging in as sammy:

  • ssh sammy@203.0.113.0

You should see something like:

Output
This account is limited to FTP access only.
Connection to 203.0.113.0 closed.

This confirms that the user can no longer ssh to the server and is limited to FTP access only.
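
If you later need to restore normal shell access for this account, you can point it back at a regular shell (a minimal sketch, assuming bash):

  • sudo usermod sammy -s /bin/bash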

 

Conclusion

In this tutorial we covered setting up FTP for users with a local account. If you need to use an external authentication source, you might want to look into vsftpd's support of virtual users. This offers a rich set of options through the use of PAM, the Pluggable Authentication Modules, and is a good choice if you manage users in another system such as LDAP or Kerberos.

Author: Angelo A Vitale
Last update: 2018-12-14 18:21


I want to set www.domain.com to www.domain.com/index.html. How can I do that?

You will want to edit this file:

sudo nano /etc/apache2/mods-enabled/dir.conf

For example, my website is in PHP, so my dir.conf file looks like this:

<IfModule mod_dir.c>
    DirectoryIndex index.php index.html index.cgi index.pl index.xhtml index.htm
</IfModule>

Since you want index.html to be served first, edit your file like this:

<IfModule mod_dir.c>
    DirectoryIndex index.html index.cgi index.pl index.php index.xhtml index.htm
</IfModule>
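
After saving the change, restart (or reload) Apache so the new DirectoryIndex order takes effect; this step is not shown in the original answer:

sudo service apache2 restart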

Author: Angelo A Vitale
Last update: 2018-12-14 18:34


Remove unused Ubuntu kernels

This one-liner will help you remove unused Ubuntu kernels. Ubuntu does not remove old kernels when it installs a new one, but the default /boot partition is relatively small, about 100 MB. After around 10 kernels, you can get "No space left on device" errors when upgrading with apt-get. You can then either remove the old kernels manually or use this one-liner to remove them all automatically.

export KERNEL=$(uname -r | sed -r 's/-[a-z]+$//'); dpkg --get-selections | grep -E "linux-(header|image).*" | grep -iw install | sort | grep -v "$KERNEL" | grep -v "lts" | sed 's/install//g' | xargs dpkg -P

Here's the command by command explanation:

export KERNEL=$(uname -r | sed -r 's/-[a-z]+$//')

The first portion sets the current kernel number in a variable KERNEL. It only takes the number, and greps out any additions like -generic or -server.

dpkg --get-selections 

The second portion first prints out all available packages.

grep -E "linux-(header|image).*"

The third portion greps for all packages with either linux-header or linux-image in the name.

grep -iw install

The fourth portion greps out only installed packages.

sort

The fifth portion sorts the output.

grep -v "$KERNEL" | grep -v "lts"

The sixth portion filters out the current kernel and the lts kernel package. Removing those will cause problems.

sed 's/install//g'

The seventh part strips off the install part.

xargs dpkg -P

The last part actually removes the packages. xargs sends all the package names to dpkg, and dpkg -P purges them. That means removing the packages and removing their configuration files.
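
On newer Ubuntu releases, a simpler and usually safer alternative is to let apt clean up old, automatically installed kernels itself; this typically keeps the running kernel and the most recent ones:

sudo apt-get autoremove --purge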

Author: Angelo A Vitale
Last update: 2018-12-14 18:35


How To Run Your Own Mail Server with Mail-in-a-Box on Ubuntu 14.04

Introduction

Mail-in-a-Box is an open source software bundle that makes it easy to turn your Ubuntu server into a full-stack email solution for multiple domains.

For securing the server, Mail-in-a-Box makes use of Fail2ban and an SSL certificate (self-signed by default). It auto-configures a UFW firewall with all the required ports open. Its anti-spam and other security features include graylisting, SPF, DKIM, DMARC, opportunistic TLS, strong ciphers, HSTS, and DNSSEC (with DANE TLSA).

Mail-in-a-Box is designed to handle SMTP, IMAP/POP, spam filtering, webmail, and even DNS as part of its all-in-one solution. Since the server itself is handling your DNS, you'll get an off-the-shelf DNS solution optimized for mail. Basically, this means you'll get sophisticated DNS records for your email (including SPF and DKIM records) without having to research and set them up manually. You can tweak your DNS settings afterwards as needed, but the defaults should work very well for most users hosting their own mail.

This tutorial shows how to set up Mail-in-a-Box on a DigitalOcean Droplet running Ubuntu 14.04 x86-64.

 

Prerequisites

Mail-in-a-Box is very particular about the resources that are available to it. Specifically, it requires:

  • An Ubuntu 14.04 x86-64 Droplet
  • The server must have at least 768 MB of RAM (1 GB recommended)
  • Be sure that the server has been set up along the lines given in this tutorial, including adding a sudo user and disabling password SSH access for the root user (and possibly all users if your SSH keys are set up)
  • When setting up the DigitalOcean Droplet, the name should be set to box.example.com. Setting the hostname is discussed later in this tutorial
  • We'll go into more detail later, but your domain registrar needs to support setting custom nameservers and glue records so you can host your own DNS on your Droplet; the term vanity nameservers is frequently used
  • (Optional) Purchase an SSL certificate to use in place of the self-signed one; this is recommended for production environments

On the RAM requirement, the installation script will abort with the following output if the RAM requirement is not met:

Error
Your Mail-in-a-Box needs more memory (RAM) to function properly.
Please provision a machine with at least 768 MB, 1 GB recommended.
This machine has 513 MB memory

Before embarking on this, be sure that you have an Ubuntu server with 1 GB of RAM.

For this article, we'll assume that the domain for which you are setting up an email server is example.com. You are, of course, expected to replace this with your real domain name.

 

Step 1 — Configure Hostname

In this step, you'll learn how to set the hostname properly, if it is not already set. Then you'll modify the /etc/hosts file to match.

From here on, it is assumed that you're logged into your DigitalOcean account and also logged into the server as a sudo user via SSH using:

  • ssh sammy@your_server_ip

Officially, it is recommended that the hostname of your server be set to box.example.com. This should also be the name of the Droplet as it appears on your DigitalOcean dashboard. If the name of the Droplet is set to just the domain name, rename it by clicking on the name of the Droplet, then Settings > Rename.

After setting the name of the Droplet as recommended, verify that it matches what appears in the /etc/hostname file by typing the command:

  • hostname

The output should read something like this:

Output
box.example.com

If the output does not match the name as it appears on your DigitalOcean dashboard, correct it by typing:

  • echo "box.example.com" | sudo tee /etc/hostname
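
The file change takes effect at the next boot; to apply the new hostname to the running system right away, you can also run (a standard command, not part of the original steps):

  • sudo hostname box.example.com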
 

Step 2 — Modify /etc/hosts File

The /etc/hosts file needs to be modified to associate the hostname with the server's IP address. To edit it, open it with nano or your favorite editor using:

  • sudo nano /etc/hosts

Modify the IPv4 addresses, so that they read:

/etc/hosts
127.0.0.1 localhost.localdomain localhost
your_server_ip box.example.com box

You can copy the localhost.localdomain localhost line exactly. Use your own IP and domain on the second line.

Save and close the file.
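
You can confirm that the hostname and hosts file agree by asking for the fully-qualified name; it should print box.example.com:

  • hostname -f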

 

Step 3 — Create Glue Records

While it's possible to have an external DNS service, like that provided by your domain registrar, handle all DNS resolutions for the server, it's strongly recommended to delegate DNS responsibilities to the Mail-in-a-Box server.

That means you'll need to set up glue records when using Mail-in-a-Box. Using glue records makes it easier to securely and correctly set up the server for email. When using this method, it is very important that all DNS responsibilities be delegated to the Mail-in-a-Box server, even if there's an active website using the target domain.

If you do have an active website at your domain, make sure to set up the appropriate additional DNS records on your Mail-in-a-Box server. Otherwise, your domain won't resolve to your website. You can copy your existing DNS records to make sure everything works the same.

Setting up glue records (also called private nameservers, vanity nameservers, and child nameservers) has to be accomplished at your domain registrar.

To set up a glue record, the following tasks have to be completed:

  1. Set the glue records themselves. This involves creating custom nameserver addresses that associate the server's fully-qualified hostname, plus the ns1 and ns2 prefixes, with its IP address. These should be as follows:
  • ns1.box.example.com your_server_ip
  • ns2.box.example.com your_server_ip
  2. Transfer DNS responsibilities to the Mail-in-a-Box server.
  • example.com NS ns1.box.example.com
  • example.com NS ns2.box.example.com

Note: Both tasks must be completed correctly. Otherwise, the server will not be able to function as a mail server. (Alternately, you can set up all the appropriate MX, SPF, DKIM, etc., records on a different nameserver.)

The exact steps involved in this process vary by domain registrar. If the steps given in this article do not match yours, contact your domain registrar's tech support team for assistance.

Example: Namecheap

To start, log into your domain registrar's account. How your domain registrar's account dashboard looks depends on the domain registrar you're using. The example uses Namecheap, so the steps and images used in this tutorial are exactly as you'll find them if you have a Namecheap account. If you're using a different registrar, call their tech support or go through their knowledgebase to learn how to create a glue record.

After logging in, find a list of the domains that you manage and click on the target domain; that is, the one you're about to use to set up the mail server.

Look for a menu item that allows you to modify its nameserver address information. On the Namecheap dashboard, that menu item is called Nameserver Registration under the Advanced Options menu category. You should get an interface that looks like the following:

Modifying the Nameservers

We're going to set up two glue records for the server:

  • ns1.box.example.com
  • ns2.box.example.com

Since only one custom field is provided, they'll have to be configured in sequence. As shown in the image below, type ns1.box where the number 1 appears, then type the IP address of the Mail-in-a-Box server in the IP Address field (indicated by the number 2). Finally, click the Add Nameservers button to add the record (number 3).

Repeat for the other record, making sure to use ns2.box along with the same domain name and IP address.

After both records have been created, look for another menu entry that says Transfer DNS to Webhost. You should get a window that looks just like the one shown in the image below. Select the custom DNS option, then type in the first two fields:

  • ns1.box.example.com
  • ns2.box.example.com

Custom DNS

Click to apply the changes.

Note: The custom DNS servers you type here should be the same as the ones you just specified for the Nameserver Registration.

Changes to DNS take some time to propagate. It could take up to 24 hours, but it took only about 15 minutes for the changes made to the test domain to propagate.

You can verify that the DNS changes have been propagated by visiting whatsmydns.net. Search for the A and MX records of the target domain. If they match what you set in this step, then you may proceed to Step 4. Otherwise, go through this step again or contact your registrar for assistance.
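
If you prefer the command line to whatsmydns.net, the same check can be made with dig from any machine (example queries; substitute your own domain):

  • dig +short NS example.com
  • dig +short MX example.com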

 

Step 4 — Install Mail-in-a-Box

In this step, you'll run the script to install Mail-in-a-Box on your Droplet. The Mail-in-a-Box installation script installs every package required to run a full-blown email server, so all you need to do is run a simple command and follow the prompts.

Assuming you're still logged into the server, move to your home directory:

  • cd ~

Install Mail-in-a-Box:

  • curl -s https://mailinabox.email/bootstrap.sh | sudo bash

The script will prompt you with the introductory message in the following image. Press ENTER.

Mail-in-a-Box Installation

You'll now be prompted to create the first email address, which you'll later use to log in to the system. You could enter contact@example.com or another email address at your domain. Accept or modify the suggested email address, and press ENTER. After that, you'll be prompted to specify and confirm a password for the email account.

Your Email Address

After the email setup, you'll be prompted to confirm the hostname of the server. It should match the one you set in Step 1, which in this example is box.example.com. Press ENTER.

Hostname

Next you'll be prompted to select your country. Select it by scrolling up or down using the arrow keys. Press ENTER after you've made the right choice.

Country Code

At some point, you'll get this prompt:

Output
Okay. I'm about to set up contact@example.com for you. This account will also have access to the box's control panel.
password:

Specify a password for the default email account, which will also be the default web interface admin account.

After installation has completed successfully, you should see some post-installation output that includes:

Output
mail user added
added alias hostmaster@box.example.com (=> administrator@box.example.com)
added alias postmaster@example.com (=> administrator@box.example.com)
added alias admin@example.com (=> administrator@box.example.com)
updated DNS: example.com
web updated

alias added
added alias admin@box.example.com (=> administrator@box.example.com)
added alias postmaster@box.example.com (=> administrator@box.example.com)


-----------------------------------------------

Your Mail-in-a-Box is running.

Please log in to the control panel for further instructions at:

https://your_server_ip/admin

You will be alerted that the website has an invalid certificate. Check that
the certificate fingerprint matches:

1F:C1:EE:C7:C6:2C:7C:47:E8:EF:AC:5A:82:C1:21:67:17:8B:0C:5B

Then you can confirm the security exception and continue.
 

Step 5 — Log In to Mail-in-a-Box Dashboard

Now you'll log in to the administrative interface of Mail-in-a-Box and get to know your new email server. To access the admin interface, use the URL provided in the post-installation output. This should be:

  • https://your_server_ip/admin#

Because HTTPS and a self-signed certificate were used, you will get a security warning in your browser window. You'll have to create a security exception. How that's done depends on the browser you're using.

If you're using Firefox, for example, you will get a browser window with the familiar warning shown in the next image.

To accept the certificate, click the I Understand the Risks button, then on the Add Exception button.

The connection is untrusted in Firefox

On the next screen, you may verify that the certificate fingerprint matches the one in the post-installation output, then click the Confirm Security Exception button.

Add Security Exception in Firefox

After the exception has been created, log in using the username and password of the email account created during installation. Note that the username is the complete email address, like contact@example.com.

When you log in, a system status check is initiated. Mail-in-a-Box will check that all aspects of the server, including the glue records, have been configured correctly. If true, you should see a sea of green (and some yellowish green) text, except for the part pertaining to SSL certificates, which will be in red. You might also see a message about a reboot, which you can take care of.

Note: If there are outputs in red about incorrect DNS MX records for the configured domain, then Step 3 was not completed correctly. Revisit that step or contact your registrar's tech support team for assistance.

If the only red texts you see are because of SSL certificates, congratulations! You have now successfully set up your own mail server using Mail-in-a-Box.

If you want to revisit this section (for example, after waiting for DNS to propagate), it's under System > Status Checks.

 

Step 6 — Access Webmail & Send Test Email

To access the webmail interface, click on Mail > Instructions from the top navigation bar, and access the URL provided on that page. It should be something like this:

  • https://box.example.com/mail

Log in with the email address (include the @example.com part) and password that you set up earlier.

Mail-in-a-box uses Roundcube as its webmail app. Try sending a test email to an external email address. Then, reply or send a new message to the address managed by your Mail-in-a-Box server.

The outgoing email should be received almost immediately, but because graylisting is in effect on the Mail-in-a-Box server, it will take about 15 minutes before incoming email shows up.

This won't work if DNS is not set up correctly.

If you can both send and receive test messages, you are now running your own email server. Congratulations!

 

(Optional) Step 7 — Install SSL Certificate

Mail-in-a-box generates its own self-signed certificate by default. If you want to use this server in a production environment, we highly recommend installing an official SSL certificate.

First, purchase your certificate. Or, to learn how to create a free signed SSL certificate, refer to the How To Set Up Apache with a Free Signed SSL Certificate on a VPS tutorial.

Then, from the Mail-in-a-Box admin dashboard, select System > SSL Certificates from the top navigation menu.

From there, use the Install Certificate button next to the appropriate domain or subdomain. Copy and paste your certificate and any chain certificates into the provided text fields. Finally, click the Install button.

Now you and your users should be able to access webmail and the admin panel without browser warnings.

 

Conclusion

It's easy to keep adding domains and additional email addresses to your Mail-in-a-Box server. To add a new address at a new or existing domain, just add another email account from Mail > Users in the admin dashboard. If the email address is at a new domain, Mail-in-a-Box will automatically add the appropriate new settings for it.

If you're adding a new domain, make sure you set the domain's nameservers to ns1.box.example.com and ns2.box.example.com (the same ones we set up earlier for the first domain) at your domain registrar. Your Droplet will handle all of the DNS for the new domain.

To see the current DNS settings, visit System > External DNS. To add your own entries, visit System > Custom DNS.

Mail-in-a-Box also provides functionality beyond the scope of this article. It can serve as a hosted contact and calendar manager courtesy of ownCloud. It can also be used to host static websites.

Further information about Mail-in-a-Box is available at the project's home page.


https://www.digitalocean.com/community/tutorials/how-to-run-your-own-mail-server-with-mail-in-a-box-on-ubuntu-14-04

Author: Angelo A Vitale
Last update: 2018-12-15 20:19


Ubuntu, get all updates with one command

sudo -- sh -c 'apt-get update; apt-get upgrade -y; apt-get dist-upgrade -y; apt-get autoremove -y; apt-get autoclean -y'
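
A variation on the same idea (not part of the original one-liner) that stops at the first failing step instead of pushing on regardless, by chaining the commands with &&:

sudo sh -c 'apt-get update && apt-get upgrade -y && apt-get dist-upgrade -y && apt-get autoremove -y && apt-get autoclean -y'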

Author: Angelo A Vitale
Last update: 2018-12-16 17:05


Restart Apache

sudo service apache2 restart

Author: Angelo A Vitale
Last update: 2018-12-19 04:11


How To Install Nagios 4 and Monitor Your Servers on Ubuntu 14.04

How To Install Nagios 4 and Monitor Your Servers on Ubuntu 14.04

Introduction

In this tutorial, we will cover the installation of Nagios 4, a very popular open source monitoring system, on Ubuntu 14.04. We will cover some basic configuration, so you will be able to monitor host resources via the web interface. We will also utilize the Nagios Remote Plugin Executor (NRPE), that will be installed as an agent on remote hosts, to monitor their local resources.

Nagios is useful for keeping an inventory of your servers, and making sure your critical services are up and running. Using a monitoring system, like Nagios, is an essential tool for any production server environment.

 Prerequisites

To follow this tutorial, you must have superuser privileges on the Ubuntu 14.04 server that will run Nagios. Ideally, you will be using a non-root user with superuser privileges. If you need help setting that up, follow the steps 1 through 3 in this tutorial: Initial Server Setup with Ubuntu 14.04.

A LAMP stack is also required. Follow this tutorial if you need to set that up: How To Install Linux, Apache, MySQL, PHP (LAMP) stack on Ubuntu 14.04.

This tutorial assumes that your server has private networking enabled. If it doesn't, just replace all the references to private IP addresses with public IP addresses.

Now that we have the prerequisites sorted out, let's move on to getting Nagios 4 installed.

 Install Nagios 4

This section will cover how to install Nagios 4 on your monitoring server. You only need to complete this section once.

Create Nagios User and Group

We must create a user and group that will run the Nagios process. Create a "nagios" user and "nagcmd" group, then add the user to the group with these commands:

  • sudo useradd nagios
  • sudo groupadd nagcmd
  • sudo usermod -a -G nagcmd nagios

Install Build Dependencies

Because we are building Nagios Core from source, we must install a few development libraries that will allow us to complete the build. While we're at it, we will also install apache2-utils, which will be used to set up the Nagios web interface.

First, update your apt-get package lists:

  • sudo apt-get update

Then install the required packages:

  • sudo apt-get install build-essential libgd2-xpm-dev openssl libssl-dev xinetd apache2-utils unzip

Let's install Nagios now.

Install Nagios Core

Download the source code for the latest stable release of Nagios Core. Go to the Nagios downloads page, and click the Skip to download link below the form. Copy the link address for the latest stable release so you can download it to your Nagios server.

At the time of this writing, the latest stable release is Nagios 4.1.1. Download it to your home directory with curl:

cd ~
curl -L -O https://assets.nagios.com/downloads/nagioscore/releases/nagios-4.1.1.tar.gz

Extract the Nagios archive with this command:

  • tar xvf nagios-*.tar.gz

Then change to the extracted directory:

  • cd nagios-*

Before building Nagios, we must configure it. If you want to configure it to use postfix (which you can install with apt-get), add --with-mail=/usr/sbin/sendmail to the following command:

  • ./configure --with-nagios-group=nagios --with-command-group=nagcmd
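
For example, if you have installed postfix and want Nagios notifications sent through it, the full command from the note above would be:

  • ./configure --with-nagios-group=nagios --with-command-group=nagcmd --with-mail=/usr/sbin/sendmail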

Now compile Nagios with this command:

  • make all

Now we can run these make commands to install Nagios, init scripts, and sample configuration files:

  • sudo make install
  • sudo make install-commandmode
  • sudo make install-init
  • sudo make install-config
  • sudo /usr/bin/install -c -m 644 sample-config/httpd.conf /etc/apache2/sites-available/nagios.conf

In order to issue external commands via the web interface to Nagios, we must add the web server user, www-data, to the nagcmd group:

  • sudo usermod -G nagcmd www-data

Install Nagios Plugins

Find the latest release of Nagios Plugins here: Nagios Plugins Download. Copy the link address for the latest version so you can download it to your Nagios server.

At the time of this writing, the latest version is Nagios Plugins 2.1.1. Download it to your home directory with curl:

cd ~
curl -L -O http://nagios-plugins.org/download/nagios-plugins-2.1.1.tar.gz

Extract Nagios Plugins archive with this command:

  • tar xvf nagios-plugins-*.tar.gz

Then change to the extracted directory:

  • cd nagios-plugins-*

Before building Nagios Plugins, we must configure it. Use this command:

  • ./configure --with-nagios-user=nagios --with-nagios-group=nagios --with-openssl

Now compile Nagios Plugins with this command:

  • make

Then install it with this command:

  • sudo make install

Install NRPE

Find the source code for the latest stable release of NRPE at the NRPE downloads page. Download the latest version to your Nagios server.

At the time of this writing, the latest release is 2.15. Download it to your home directory with curl:

  • cd ~
  • curl -L -O http://downloads.sourceforge.net/project/nagios/nrpe-2.x/nrpe-2.15/nrpe-2.15.tar.gz

Extract the NRPE archive with this command:

  • tar xvf nrpe-*.tar.gz

Then change to the extracted directory:

  • cd nrpe-*

Configure NRPE with this command:

  • ./configure --enable-command-args --with-nagios-user=nagios --with-nagios-group=nagios --with-ssl=/usr/bin/openssl --with-ssl-lib=/usr/lib/x86_64-linux-gnu

Now build and install NRPE and its xinetd startup script with these commands:

  • make all
  • sudo make install
  • sudo make install-xinetd
  • sudo make install-daemon-config

Open the xinetd startup script in an editor:

  • sudo vi /etc/xinetd.d/nrpe

Modify the only_from line by adding the private IP address of your Nagios server to the end (substitute in the actual IP address of your server):

only_from = 127.0.0.1 10.132.224.168

Save and exit. Only the Nagios server will be allowed to communicate with NRPE.

Restart the xinetd service to start NRPE:

  • sudo service xinetd restart

Now that Nagios 4 is installed, we need to configure it.

 

Configure Nagios

Now let's perform the initial Nagios configuration. You only need to perform this section once, on your Nagios server.

Organize Nagios Configuration

Open the main Nagios configuration file in your favorite text editor. We'll use vi to edit the file:

sudo vi /usr/local/nagios/etc/nagios.cfg

Now find and uncomment this line by deleting the #:

#cfg_dir=/usr/local/nagios/etc/servers

Save and exit.

Now create the directory that will store the configuration file for each server that you will monitor:

sudo mkdir /usr/local/nagios/etc/servers

Configure Nagios Contacts

Open the Nagios contacts configuration in your favorite text editor. We'll use vi to edit the file:

sudo vi /usr/local/nagios/etc/objects/contacts.cfg

Find the email directive, and replace its value (the highlighted part) with your own email address:

email                           nagios@localhost        ; <<***** CHANGE THIS TO YOUR EMAIL ADDRESS ******

Save and exit.

Configure check_nrpe Command

Let's add a new command to our Nagios configuration:

  • sudo vi /usr/local/nagios/etc/objects/commands.cfg

Add the following to the end of the file:

define command{
        command_name check_nrpe
        command_line $USER1$/check_nrpe -H $HOSTADDRESS$ -c $ARG1$
}

Save and exit. This allows you to use the check_nrpe command in your Nagios service definitions.

Configure Apache

Enable the Apache rewrite and cgi modules:

sudo a2enmod rewrite
sudo a2enmod cgi

Use htpasswd to create an admin user, called "nagiosadmin", that can access the Nagios web interface:

sudo htpasswd -c /usr/local/nagios/etc/htpasswd.users nagiosadmin

Enter a password at the prompt. Remember this password, as you will need it to access the Nagios web interface.

Note: If you create a user that is not named "nagiosadmin", you will need to edit /usr/local/nagios/etc/cgi.cfg and change all the "nagiosadmin" references to the user you created.

Now create a symbolic link of nagios.conf to the sites-enabled directory:

sudo ln -s /etc/apache2/sites-available/nagios.conf /etc/apache2/sites-enabled/
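
Alternatively, Apache's helper script creates the same symlink (assuming the nagios.conf file installed to sites-available above):

sudo a2ensite nagios.conf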

Nagios is ready to be started. Let's do that, and restart Apache:

sudo service nagios start
sudo service apache2 restart

To enable Nagios to start on server boot, run this command:

sudo ln -s /etc/init.d/nagios /etc/rcS.d/S99nagios
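
On Ubuntu you can get the same effect with the init tools instead of a manual symlink (a sketch, assuming the init script installed by make install-init lives at /etc/init.d/nagios):

sudo update-rc.d nagios defaults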

Optional: Restrict Access by IP Address

If you want to restrict the IP addresses that can access the Nagios web interface, you will want to edit the Apache configuration file:

sudo vi /etc/apache2/sites-available/nagios.conf

Find and comment the following two lines by adding # symbols in front of them:

Order allow,deny
Allow from all

Then uncomment the following lines, by deleting the # symbols, and add the IP addresses or ranges (space delimited) that you want to allow in the Allow from line:

#  Order deny,allow
#  Deny from all
#  Allow from 127.0.0.1
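
After editing, that part of the configuration might look like this (the 192.168.1.0/24 range is only an illustration; use the addresses you actually want to allow):

  Order deny,allow
  Deny from all
  Allow from 127.0.0.1 192.168.1.0/24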

These lines appear twice in the configuration file, so you will need to perform these steps once more.

Save and exit.

Now restart Nagios and Apache to put the change into effect:

sudo service nagios restart
sudo service apache2 restart

Nagios is now running, so let's try and log in.

 

Accessing the Nagios Web Interface

Open your favorite web browser, and go to your Nagios server (substitute the IP address or hostname for the highlighted part):

http://nagios_server_public_ip/nagios

Because we configured Apache to use htpasswd, you must enter the login credentials that you created earlier. We used "nagiosadmin" as the username:

htaccess Authentication Prompt

After authenticating, you will see the default Nagios home page. Click on the Hosts link, in the left navigation bar, to see which hosts Nagios is monitoring:

Nagios Hosts Page

As you can see, Nagios is monitoring only "localhost", or itself.

Let's monitor another host with Nagios!

 

Monitor a Host with NRPE

In this section, we'll show you how to add a new host to Nagios, so it will be monitored. Repeat this section for each server you wish to monitor.

On a server that you want to monitor, update apt-get:

sudo apt-get update

Now install Nagios Plugins and NRPE:

sudo apt-get install nagios-plugins nagios-nrpe-server

Configure Allowed Hosts

Now, let's update the NRPE configuration file. Open it in your favorite editor (we're using vi):

sudo vi /etc/nagios/nrpe.cfg

Find the allowed_hosts directive, and add the private IP address of your Nagios server to the comma-delimited list (substitute it in place of the highlighted example):

allowed_hosts=127.0.0.1,10.132.224.168

Save and exit. This configures NRPE to accept requests from your Nagios server, via its private IP address.

Configure Allowed NRPE Commands

Look up the name of your root filesystem (because it is one of the items we want to monitor):

df -h /
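
The output will look something like the following (the values here are only illustrative); the entry in the Filesystem column is the name you need:

Filesystem      Size  Used Avail Use% Mounted on
/dev/vda         40G   12G   26G  32% /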

We will be using the filesystem name in the NRPE configuration to monitor your disk usage (it is probably /dev/vda). Now open nrpe.cfg for editing:

sudo vi /etc/nagios/nrpe.cfg

The NRPE configuration file is very long and full of comments. There are a few lines that you will need to find and modify:

  • server_address: Set to the private IP address of this host
  • allowed_hosts: Set to the private IP address of your Nagios server
  • command[check_hda1]: Change /dev/hda1 to whatever your root filesystem is called

The three aforementioned lines should look like this (substitute the appropriate values):

server_address=client_private_IP
allowed_hosts=nagios_server_private_IP
command[check_hda1]=/usr/lib/nagios/plugins/check_disk -w 20% -c 10% -p /dev/vda

Note that there are several other "commands" defined in this file that will run if the Nagios server is configured to use them. Also note that NRPE will be listening on port 5666 because server_port=5666 is set. If you have any firewalls blocking that port, be sure to open it to your Nagios server.

Save and quit.

Restart NRPE

Restart NRPE to put the change into effect:

sudo service nagios-nrpe-server restart
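
As a quick check (not part of the original steps), you can confirm from the Nagios server that it can reach the agent. The check_nrpe plugin was installed to /usr/local/nagios/libexec during the NRPE install earlier, and a working agent should answer with its NRPE version:

/usr/local/nagios/libexec/check_nrpe -H client_private_IP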

Once you are done installing and configuring NRPE on the hosts that you want to monitor, you will have to add these hosts to your Nagios server configuration before it will start monitoring them.

Add Host to Nagios Configuration

On your Nagios server, create a new configuration file for each of the remote hosts that you want to monitor in /usr/local/nagios/etc/servers/. Replace the highlighted word, "yourhost", with the name of your host:

sudo vi /usr/local/nagios/etc/servers/yourhost.cfg

Add in the following host definition, replacing the host_name value with your remote hostname ("web-1" in the example), the alias value with a description of the host, and the address value with the private IP address of the remote host:

define host {
        use                             linux-server
        host_name                       yourhost
        alias                           My first Apache server
        address                         10.132.234.52
        max_check_attempts              5
        check_period                    24x7
        notification_interval           30
        notification_period             24x7
}

With the configuration file above, Nagios will only monitor if the host is up or down. If this is sufficient for you, save and exit then restart Nagios. If you want to monitor particular services, read on.

Add any of these service blocks for services you want to monitor. Note that the value of check_command determines what will be monitored, including status threshold values. Here are some examples that you can add to your host's configuration file:

Ping:

define service {
        use                             generic-service
        host_name                       yourhost
        service_description             PING
        check_command                   check_ping!100.0,20%!500.0,60%
}

SSH (notifications_enabled set to 0 disables notifications for a service):

define service {
        use                             generic-service
        host_name                       yourhost
        service_description             SSH
        check_command                   check_ssh
        notifications_enabled           0
}

If you're not sure what use generic-service means, it is simply inheriting the values of a service template called "generic-service" that is defined by default.

Now save and quit. Reload your Nagios configuration to put any changes into effect:

sudo service nagios reload
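
If the reload reports errors, you can verify the object configuration first with the built-in check mode of the nagios binary:

/usr/local/nagios/bin/nagios -v /usr/local/nagios/etc/nagios.cfg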

Once you are done configuring Nagios to monitor all of your remote hosts, you should be set. Be sure to access your Nagios web interface, and check out the Services page to see all of your monitored hosts and services:

Nagios Services Page

 

Conclusion

Now that you are monitoring your hosts and some of their services, you might want to spend some time figuring out which services are critical to you, so you can start monitoring those. You may also want to set up notifications so that, for example, you receive an email when disk utilization reaches a warning or critical threshold or when your main website goes down, allowing you to resolve the situation promptly or even before a problem occurs.

https://www.digitalocean.com/community/tutorials/how-to-install-nagios-4-and-monitor-your-servers-on-ubuntu-14-04

Author: Angelo A Vitale
Last update: 2018-12-25 11:29


How to Install Nagios Server Monitoring on Ubuntu 16.04

How to Install Nagios Server Monitoring on Ubuntu 16.04

Nagios is an open source software for system and network monitoring. Nagios can monitor the activity of a host and its services, and provides a warning/alert if something bad happens on the server. Nagios can run on Linux operating systems. At this time, I'm using Ubuntu 16.04 for the installation.

 

Prerequisites

  • 2 Ubuntu 16.04 - 64bit servers
    • 1 - Nagios Host with IP: 192.168.1.9
    • 2 - Ubuntu Client with IP: 192.168.1.10
  • Root/Sudo access

What we will do in this tutorial:

  1. Install the package dependencies, like the LAMP stack etc.
  2. User and group configuration.
  3. Installing Nagios.
  4. Configuring Apache.
  5. Testing the Nagios Server.
  6. Adding a Host to Monitor.

 

Installing the prerequisites

Nagios requires the gcc compiler and build-essentials for the compilation, LAMP (Apache, PHP, MySQL) for the Nagios web interface and Sendmail to send alerts from the server. To install all those packages, run this command (it's just 1 line):

sudo apt-get install wget build-essential apache2 php apache2-mod-php7.0 php-gd libgd-dev sendmail unzip

 

User and group configuration

For Nagios to run, you have to create a new user for Nagios. We will name the user "nagios" and additionally create a group named "nagcmd". We add the new user to the group as shown below:

useradd nagios
groupadd nagcmd
usermod -a -G nagcmd nagios
usermod -a -G nagios,nagcmd www-data

Adding the Nagios user

 

Installing Nagios

Step 1 - Download and extract the Nagios core

cd ~
wget https://assets.nagios.com/downloads/nagioscore/releases/nagios-4.2.0.tar.gz
tar -xzf nagios*.tar.gz
cd nagios-4.2.0

Step 2 - Compile Nagios

Before you build Nagios, you will have to configure it with the user and the group you have created earlier.

./configure --with-nagios-group=nagios --with-command-group=nagcmd

For more information, please use: ./configure --help.

Now to install Nagios:

make all
sudo make install
sudo make install-commandmode
sudo make install-init
sudo make install-config
/usr/bin/install -c -m 644 sample-config/httpd.conf /etc/apache2/sites-available/nagios.conf

Then copy the eventhandlers directory to the Nagios directory:

cp -R contrib/eventhandlers/ /usr/local/nagios/libexec/
chown -R nagios:nagios /usr/local/nagios/libexec/eventhandlers

Step 3 - Install the Nagios Plugins

Download and extract the Nagios plugins:

cd ~
wget https://nagios-plugins.org/download/nagios-plugins-2.1.2.tar.gz
tar -xzf nagios-plugins*.tar.gz
cd nagios-plugins-2.1.2/

Install the Nagios plugins with the commands below:

./configure --with-nagios-user=nagios --with-nagios-group=nagios --with-openssl
make
make install

Step 4 - Configure Nagios

After the installation phase is complete, you can find the default configuration of Nagios in /usr/local/nagios/.

We will configure Nagios and Nagios contact.

Edit default nagios configuration with vim:

vim /usr/local/nagios/etc/nagios.cfg

Uncomment line 51 to enable the host monitoring configuration directory:

cfg_dir=/usr/local/nagios/etc/servers

Save and exit.

Add a new folder named servers:

mkdir -p /usr/local/nagios/etc/servers

The Nagios contact can be configured in the contacts.cfg file. To open it, use:

vim /usr/local/nagios/etc/objects/contacts.cfg

Then replace the default email with your own email.

Set email address.

 

Configuring Apache

Step 1 - enable Apache modules

sudo a2enmod rewrite
sudo a2enmod cgi

You can use the htpasswd command to configure a user nagiosadmin for the nagios web interface

sudo htpasswd -c /usr/local/nagios/etc/htpasswd.users nagiosadmin

and type your password.

Step 2 - enable the Nagios virtualhost

sudo ln -s /etc/apache2/sites-available/nagios.conf /etc/apache2/sites-enabled/

Step 3 - Start Apache and Nagios

service apache2 restart
service nagios start

When Nagios starts, you may see the following error :

Starting nagios (via systemctl): nagios.serviceFailed

And this is how to fix it:

cd /etc/init.d/
cp /etc/init.d/skeleton /etc/init.d/nagios

Now edit the Nagios file:

vim /etc/init.d/nagios

... and add the following code:

DESC="Nagios"
NAME=nagios
DAEMON=/usr/local/nagios/bin/$NAME
DAEMON_ARGS="-d /usr/local/nagios/etc/nagios.cfg"
PIDFILE=/usr/local/nagios/var/$NAME.lock

Make it executable and start Nagios:

chmod +x /etc/init.d/nagios
service apache2 restart
service nagios start

 

Testing the Nagios Server

Please open your browser and access the Nagios server IP, in my case: http://192.168.1.9/nagios.

Nagios Login with apache htpasswd.

Nagios Login

Nagios Admin Dashboard

Nagios Dashboard

 

Adding a Host to Monitor

In this tutorial, I will add an Ubuntu host to be monitored by the Nagios server we set up above.

Nagios Server IP : 192.168.1.9
Ubuntu Host IP : 192.168.1.10

Step 1 - Connect to ubuntu host

ssh root@192.168.1.10

Step 2 - Install NRPE Service

sudo apt-get install nagios-nrpe-server nagios-plugins

Step 3 - Configure NRPE

After the installation is complete, edit the nrpe file /etc/nagios/nrpe.cfg:

vim /etc/nagios/nrpe.cfg

... and allow the Nagios Server IP 192.168.1.9 to connect by adding it to the allowed_hosts line (server_address, if you set it at all, should be this host's own IP, 192.168.1.10, since it is the address NRPE listens on).

allowed_hosts=127.0.0.1,192.168.1.9

Configure allowed hosts

Step 4 - Restart NRPE

service nagios-nrpe-server restart

Step 5 - Add Ubuntu Host to Nagios Server

Please connect to the Nagios server:

ssh root@192.168.1.9

Then create a new file for the host configuration in /usr/local/nagios/etc/servers/.

vim /usr/local/nagios/etc/servers/ubuntu_host.cfg

Add the following lines:

# Ubuntu Host configuration file

define host {
        use                          linux-server
        host_name                    ubuntu_host
        alias                        Ubuntu Host
        address                      192.168.1.10
        register                     1
}

define service {
      host_name                       ubuntu_host
      service_description             PING
      check_command                   check_ping!100.0,20%!500.0,60%
      max_check_attempts              2
      check_interval                  2
      retry_interval                  2
      check_period                    24x7
      check_freshness                 1
      contact_groups                  admins
      notification_interval           2
      notification_period             24x7
      notifications_enabled           1
      register                        1
}

define service {
      host_name                       ubuntu_host
      service_description             Check Users
      check_command                   check_local_users!20!50
      max_check_attempts              2
      check_interval                  2
      retry_interval                  2
      check_period                    24x7
      check_freshness                 1
      contact_groups                  admins
      notification_interval           2
      notification_period             24x7
      notifications_enabled           1
      register                        1
}

define service {
      host_name                       ubuntu_host
      service_description             Local Disk
      check_command                   check_local_disk!20%!10%!/
      max_check_attempts              2
      check_interval                  2
      retry_interval                  2
      check_period                    24x7
      check_freshness                 1
      contact_groups                  admins
      notification_interval           2
      notification_period             24x7
      notifications_enabled           1
      register                        1
}

define service {
      host_name                       ubuntu_host
      service_description             Check SSH
      check_command                   check_ssh
      max_check_attempts              2
      check_interval                  2
      retry_interval                  2
      check_period                    24x7
      check_freshness                 1
      contact_groups                  admins
      notification_interval           2
      notification_period             24x7
      notifications_enabled           1
      register                        1
}

define service {
      host_name                       ubuntu_host
      service_description             Total Process
      check_command                   check_local_procs!250!400!RSZDT
      max_check_attempts              2
      check_interval                  2
      retry_interval                  2
      check_period                    24x7
      check_freshness                 1
      contact_groups                  admins
      notification_interval           2
      notification_period             24x7
      notifications_enabled           1
      register                        1
}

You can find many check_command definitions in the /usr/local/nagios/etc/objects/commands.cfg file. Look there if you want to add more services, like DHCP, POP etc.
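
For example, if the Ubuntu host also runs a web server, you could add an HTTP check using the check_http command that is already defined in the default commands.cfg (this block is only an illustration, following the same pattern as the services above):

define service {
      host_name                       ubuntu_host
      service_description             HTTP
      check_command                   check_http
      max_check_attempts              2
      check_interval                  2
      retry_interval                  2
      check_period                    24x7
      check_freshness                 1
      contact_groups                  admins
      notification_interval           2
      notification_period             24x7
      notifications_enabled           1
      register                        1
}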

And now check the configuration:

/usr/local/nagios/bin/nagios -v /usr/local/nagios/etc/nagios.cfg

... to see if the configuration is correct.

Step 6 - Restart all services

On the Ubuntu Host start NRPE Service:

service nagios-nrpe-server restart

... and on the Nagios server, start Apache and Nagios:

service apache2 restart
service nagios restart

Step 7 - Testing the Ubuntu Host

Open the Nagios server from the browser and see the ubuntu_host being monitored.

The Ubuntu host is now listed among the monitored hosts.

Monitored server is listed

All services monitored without error.

All services are green

Conclusion

Nagios is an open source application for monitoring a system. Nagios is widely used because of its ease of configuration. Nagios is supported by various plugins, and you can even create your own plugins. Look here for more information.

Author: Angelo A Vitale
Last update: 2018-12-25 01:41


Install InvoicePlane On Ubuntu 16.04 LTS With Apache2, MariaDB And PHP 7.1 Support

When deciding on a free open source quotes, invoices and payments platform, don’t ignore InvoicePlane… This free open source invoicing and payment platform based on PHP can get the job done…

InvoicePlane is an open source, self-hosted application for managing quotes, invoices and payments, based on PHP. It is designed from the ground up for ease of use, allowing business owners to create, manage and track their business quotes, invoices and client payments.

If you’re looking for a robust, secure and easy to use invoicing system that’s 100% free, you’ll find InvoicePlane to be useful. This brief tutorial is going to show students and new users how to install InvoicePlane on Ubuntu 16.04 LTS with Apache2, MariaDB and PHP 7.1 support.

This post covers installing the latest version of InvoicePlane, which at the time of writing was v1.5.5.

To get started with installing InvoicePlane, follow the steps below:

Step 1: Install Apache2 Web Server

InvoicePlane requires a webserver to function and the most popular webserver in use today is Apache2. So, go and install Apache2 on Ubuntu by running the commands below:

sudo apt update
sudo apt install apache2

After installing Apache2, run the command below to disable directory listing globally.

sudo sed -i "s/Options Indexes FollowSymLinks/Options FollowSymLinks/" /etc/apache2/apache2.conf

Next, run the commands below to stop, start and enable the Apache2 service to always start up when the server boots.

sudo systemctl stop apache2.service
sudo systemctl start apache2.service
sudo systemctl enable apache2.service

Step 2: Install MariaDB Database Server

InvoicePlane also requires a database server to function… and MariaDB database server is a great place to start. To install it, run the command below.

sudo apt-get install mariadb-server mariadb-client

After installing, the commands below can be used to stop, start and enable MariaDB service to always start up when the server boots.

sudo systemctl stop mysql.service
sudo systemctl start mysql.service
sudo systemctl enable mysql.service

After that, run the commands below to secure MariaDB server.

sudo mysql_secure_installation

When prompted, answer the questions below by following the guide.

  • Enter current password for root (enter for none): Just press Enter
  • Set root password? [Y/n]: Y
  • New password: Enter password
  • Re-enter new password: Repeat password
  • Remove anonymous users? [Y/n]: Y
  • Disallow root login remotely? [Y/n]: Y
  • Remove test database and access to it? [Y/n]:  Y
  • Reload privilege tables now? [Y/n]:  Y

Restart MariaDB server

sudo systemctl restart mysql.service

Step 3: Install PHP 7.1 and Related Modules

PHP 7.1 isn’t available in Ubuntu’s default repositories… in order to install it, you will have to get it from a third-party repository.

Run the commands below to add the third-party repository for PHP 7.1:

sudo apt-get install software-properties-common
sudo add-apt-repository ppa:ondrej/php

Then update the package lists:

sudo apt update

Run the commands below to install PHP 7.1 and related modules.

sudo apt install php7.1 libapache2-mod-php7.1 php7.1-common php7.1-mbstring php7.1-xmlrpc php7.1-soap php7.1-gd php7.1-xml php7.1-intl php7.1-mysql php7.1-cli php7.1-mcrypt php7.1-zip php7.1-curl

After installing PHP 7.1, run the command below to open the default PHP configuration file for Apache2.

sudo nano /etc/php/7.1/apache2/php.ini

Then change the following lines in the file and save it.

file_uploads = On
allow_url_fopen = On
memory_limit = 256M
upload_max_filesize = 64M
max_execution_time = 360
date.timezone = America/Chicago

Step 4: Create InvoicePlane Database

Now that you’ve installed all the required packages, continue below to start configuring the servers. First, go and create a blank InvoicePlane database.

Run the command below to log on to the database server. When prompted for a password, type the root password you created above.

sudo mysql -u root -p

Then create a database called invplanedb:

CREATE DATABASE invplanedb;

Create a database user called invplaneuser with a new password:

CREATE USER 'invplaneuser'@'localhost' IDENTIFIED BY 'new_password_here';

Then grant invplaneuser full access to the database.

GRANT ALL ON invplanedb.* TO 'invplaneuser'@'localhost' IDENTIFIED BY 'user_password_here' WITH GRANT OPTION;

Finally, save your changes and exit.

FLUSH PRIVILEGES;
EXIT;

Step 5: Download InvoicePlane Latest Release

Next, visit InvoicePlane site and download the latest version.

After downloading, run the commands below to create a root directory for InvoicePlane and extract the downloaded file into Apache2 root directory.

cd /tmp && wget -c -O v1.5.5.zip https://invoiceplane.com/download/v1.5.5
unzip v1.5.5.zip
sudo mv ip /var/www/html/invoiceplane

Next, run the commands below to create InvoicePlane default config and .htaccess files.

sudo cp /var/www/html/invoiceplane/ipconfig.php.example /var/www/html/invoiceplane/ipconfig.php
sudo cp /var/www/html/invoiceplane/htaccess /var/www/html/invoiceplane/.htaccess

Then run the commands below to set the correct permissions for InvoicePlane to function.

sudo chown -R www-data:www-data /var/www/html/invoiceplane/
sudo chmod -R 755 /var/www/html/invoiceplane/

Step 6: Configure Apache2

Finally, create the Apache2 site configuration file for InvoicePlane. This file will control how users access InvoicePlane content. Run the commands below to create a new configuration file called invoiceplane.conf

sudo nano /etc/apache2/sites-available/invoiceplane.conf

Then copy and paste the content below into the file and save it. Replace the domain name and document root with your own values.

<VirtualHost *:80>
     ServerAdmin admin@example.com
     DocumentRoot /var/www/html/invoiceplane
     ServerName example.com
     ServerAlias www.example.com

     <Directory /var/www/html/invoiceplane/>
        Options +FollowSymlinks
        AllowOverride All
        Require all granted
     </Directory>

     ErrorLog ${APACHE_LOG_DIR}/error.log
     CustomLog ${APACHE_LOG_DIR}/access.log combined

</VirtualHost>

Save the file and exit.

Step 7: Enable the InvoicePlane and Rewrite Module

After configuring the VirtualHost above, enable it by running the commands below

sudo a2ensite invoiceplane.conf
sudo a2enmod rewrite

Step 8 : Restart Apache2

To load all the settings above, restart Apache2 by running the commands below.

sudo systemctl restart apache2.service

Then open your browser and browse to the server domain name. You should see the InvoicePlane setup wizard. Please follow the wizard carefully.

http://example.com/setup/

Then follow the on-screen instructions… you will be asked to input your database configuration, administrative details and other configuration settings. When complete, you may sign in and start using InvoicePlane.


https://websiteforstudents.com/install-invoiceplane-on-ubuntu-16-04-lts-with-apache2-mariadb-and-php-7-1-support/

Author: Angelo A Vitale
Last update: 2018-12-27 20:12


Apple (MAC)

How to roll back from macOS Mojave to High Sierra

downgrade-drom-mojave

If you’ve installed macOS Mojave to take it for a test drive and decided you don’t like it, or it doesn’t work with some of your apps, and you want to downgrade from Mojave to High Sierra, the good news is that it’s possible. The bad news, though, is that it’s quite a long process with lots of different steps. We recommend that you read the guide below carefully before you start.

Do you really need to downgrade?

If you’ve decided to downgrade because Mojave is running slowly, you could try improving its performance first, by getting rid of unwanted files. CleanMyMac X scans for junk files, such as those created by iTunes, the Photos app, and the Mac’s own system software. You can then preview what it’s found and recommends you delete, and decide for yourself what you want to get rid of, or you can just press a button and have it delete everything it’s found. You might find that just by deleting these files, performance improves considerably. Moreover, the app has special Optimization and Maintenance tools designed to improve your Mac's speed. You can get started with CleanMyMac very quickly by downloading it here (for free).

How to downgrade from macOS Mojave to macOS High Sierra

If you've decided that you still want to go back to High Sierra, follow the steps below. And please note that the process of downgrading is quite complicated and time-consuming, so try to be patient. 

Step 1: Back up your Mac

You should back up your Mac before you start any major process, and hopefully you backed up before installing Mojave. If you’re unsure how to back up your Mac, you can follow the steps in this article. However, any files you’ve used or been working on since you installed Mojave won’t be up to date on that back up, so you need to copy those to an external disk or a cloud storage service like iCloud Drive or Dropbox. Don’t do anything else until you’ve copied those files.

Step 2: Make notes

The process of downgrading wipes everything from your hard drive, including passwords, license keys and settings. If you have a backup of your Mac from before you upgraded to Mojave, you should be able to migrate much of that data back to your Mac once you’ve reinstalled High Sierra. However, it’s a good idea to make sure you have a note of all the passwords, settings, licence keys and other data you’re likely to need. If you use a password manager that syncs with other devices, you could use that to store all the data you need. Otherwise, any cloud-based note-taking tool that encrypts notes will do.

It’s also a good idea to make screenshots of settings, to make it easier to set them back up later on. You should store these on an external disk, or cloud storage space.

Step 3: Erase Mojave

Once you’ve backed up the files you’ve worked on since installing Mojave (and, if you need one, created the bootable installer described below), it’s time to erase Mojave.

  1. Make sure your Mac is connected to the internet.
  2. Click on the Apple menu and choose Restart.
  3. Hold down Command+Option+Shift+R to boot into recovery mode. Note, you can also boot into Recovery mode by pressing Command+R. However, adding Option+Shift will allow you to reinstall High Sierra, if your Mac came with it installed.
  4. Click on Disk Utility in the macOS Utilities window.
  5. Select the disk with Mojave on it.
  6. Choose Erase.
  7. Give the disk a name, choose Mac OS Extended (Journaled) or APFS as the file format. 
  8. Click Erase.
  9. Quit Disk Utility.

erase-mojave

How to downgrade from macOS Mojave if your Mac shipped with High Sierra

  1. Erase your startup disk as described above — you need to do that first because Recovery mode won’t install an older version of the OS over a newer version.
  2. From macOS Utilities, choose Reinstall macOS.
  3. Press Continue.

How to downgrade from a Time Machine backup

If you made a backup of your Mac just before installing Mojave, you’re in luck. You can use that to reinstall High Sierra. Make sure your Time Machine disk is connected to your Mac, either directly or over a network, before you start.

  1. Erase your startup disk, as described above.
  2. In the macOS Utilities window, choose Restore from Time Machine Backup.
  3. If your backup is on an external disk, select it. If it’s on a Time Capsule or network disk, select it and choose Connect to Remote Disk.
  4. Type in your name and password for the disk, if necessary.
  5. Select the date and time of the backup you want to restore to.
  6. Follow the onscreen instructions.

How to downgrade using a bootable High Sierra installer

If your Mac didn’t ship with High Sierra and you don’t have a Time Machine backup, you’ll need to create an installer disk. Apple used to make all previous versions of macOS available in the Purchased tab of the Store, but the most recent version there now is El Capitan.

If you’re downgrading before the full public release of macOS Mojave, High Sierra is still available in the App Store. You can find it on the left hand side of the main App Store window, or by searching for it.

Note: If you want to downgrade Mojave after its final release and haven’t already created a bootable installer of High Sierra, you’re out of luck. You’ll have to create a bootable installer of El Capitan or use Recovery Mode to roll back to the most recent version of macOS installed on your Mac. To do that, use Command+Option+R when you boot into Recovery mode (see Erase Mojave, above) instead of Command+Option+Shift+R. For that reason, if you’re reading this before Mojave has been released, it’s worth downloading High Sierra now, just in case.

Click the Download button on the App Store page and wait for the OS to download. If the installer automatically launches when it’s downloaded, quit it.

  1. You’ll need an external hard disk or SSD, or a USB stick that’s at least 12GB to create the installer.
  2. Plug the external drive or USB stick into your Mac.
  3. In the Finder, click on the Go menu, select Utilities.
  4. Launch Disk Utility from the Utilities folder.
  5. Click on the external disk in the sidebar and choose the Erase tab.
  6. Give the drive the name ‘MyVolume’ in the Erase window, set the format to Mac OS Extended (Journaled) or APFS.
  7. Click Erase.
  8. Press Done when it’s finished.
  9. Quit Disk Utility.

Go back to the Utilities folder in the Finder and this time, launch Terminal.

  1. Type the following command: sudo /Applications/Install\ macOS\ High\ Sierra.app/Contents/Resources/createinstallmedia --volume /Volumes/MyVolume --applicationpath /Applications/Install\ macOS\ High\ Sierra.app
  2. Hit the Return key.
  3. Type in an administrator account password for your Mac.
  4. Wait for the word ‘Done’ to appear in the Terminal window.

bootable-high-sierra-installer

Step 4: Reinstall High Sierra

  1. Go to the Apple menu, choose Restart, and hold down the Option key.
  2. When the option to select a boot disk appears, choose the installer disk you just created. 
  3. High Sierra will begin installing on your Mac.
  4. When it’s finished, your Mac will restart and Startup Assistant will appear.
  5. Go through the steps to set up your Mac.

Step 5: Restore settings

If you made a non-Time Machine backup of your Mac before installing Mojave, you can use the backup tool to restore your Mac to the state it was in when you made the backup.

Otherwise, you’ll need to reinstall apps manually, using the notes you made earlier to enter licence codes and re-create settings. You can also copy back files that you backed up when you were running macOS Mojave. 


How to keep your fresh installation clean

You’ll notice when you revert to a clean installation of High Sierra, that your Mac seems to be running more quickly and encountering fewer problems than it did previously. Part of the reason for that is that, as you use your Mac, it accumulates lots of temporary files, cache files and other ‘junk’ that can cause performance and compatibility problems.

It could also be because your previous installation, along with all the files and applications you had installed, was occupying more than 90% of your Mac’s startup drive. MacOS uses your startup drive to store data temporarily, as a proxy for keeping it in RAM. If you don’t have enough free storage space, you will start to see performance problems.

The solution is to regularly clear out junk files and to audit your Applications, uninstalling any you no longer use. We recommend CleanMyMac X for both tasks. CleanMyMac makes it easy to uninstall apps with a couple of clicks. And when it does so, it doesn’t just remove the application itself — which is what happens if you just drag it to the Trash — it also tracks down and removes all the application’s associated files in your user Library and gets rid of those too.


As you can see, downgrading from Mojave to High Sierra could be quite simple or it could be a long drawn-out process, depending on how you do it. If your Mac came with High Sierra, you’re in luck, because you can use Recovery Mode to roll back — though you’ll need to erase your startup disk first. Likewise, if you have a Time Machine backup of your High Sierra installation from just before you installed Mojave. If neither of those applies, your only option is to create an installer disk from the App Store. Whichever method you use, once you’ve reinstalled High Sierra, it’s worth using CleanMyMac (get its free version here) to keep your clean installation fresh and performing as well as it can.

https://macpaw.com/how-to/downgrade-from-mojave

Author: Angelo A Vitale
Last update: 2018-12-31 00:01


How to uninstall ITSM Agent from MAC

Endpoint Manager allows users to remove the Endpoint Manager agent from enrolled Mac devices via the "Delete Device" option in the Device Management section. This procedure guides you through removing the Endpoint Manager agent using the console, and also locally on the endpoint Mac devices.

1. Remove the Endpoint Manager agent for Mac OS X from the Endpoint Manager portal

Step[1]: Go to Endpoint Manager → Device Management → Device List. Select the Mac device that needs to be removed, click the "More" option, then click "Delete device".

Step[2]: Click "Confirm" shown in the popup.

Step[3]: The Mac device will be removed from the device list.
Note: The Endpoint Manager agent and Endpoint Manager profile will be automatically uninstalled from the Mac device.

2. Remove the Endpoint Manager agent for Mac OS X locally

Method 1: The agent can be uninstalled from the Applications folder of the endpoint Mac device.

Step[1]: Go to the Applications folder on the endpoint Mac device; CDMAgent is visible.

 

Step[2]: Right-click the CDMAgent icon, then click the "Move to Trash" option.

 

Step[3]: Enter the password in the popup, then click OK; CDMAgent will be removed from the Applications folder.

 

 

Method 2: The Endpoint Manager agent can be uninstalled by removing the Endpoint Manager profile in System Preferences.

 

 Step[2]: Go to "profiles" folder.

 

Step[3]: The Endpoint Manager agent profile "endpoint manager" is visible. Click "–" at the bottom, then click the "Remove" option in the popup; the Endpoint Manager agent will now be uninstalled automatically.

 

 

Method 3: The Endpoint Manager agent can be uninstalled via the "Quit" option.

 

Author: Angelo A Vitale
Last update: 2019-01-27 22:04


Backup

How to Clone a Hard Drive

If you need to migrate your data or are looking to keep a backup handy, you can clone your hard drive. Here's how to do it in Windows and on a Mac.

There are plenty of great services that can back up your files, but sometimes you need something a bit more bulletproof. Maybe you're migrating your Windows installation to a new hard drive, or maybe you want a complete 1-to-1 copy in case anything goes wrong. In those cases, your best bet is to clone your hard drive, creating an exact copy that you can swap in and boot up right away.

Some backup services, like IDrive and Acronis, have disk-cloning features built in, supplementing the normal file backup. We'll be using some free tools designed specifically for drive cloning in this guide, though. If you want a true backup solution with supplemental cloning features, check out one of the paid options. But for one-off clones (like if you're migrating your OS to a new drive), these tools will be all you need.

  • Connect Your Secondary Drive

    For this process, you'll obviously need two drives: the source drive (with the data you want to clone), and the destination drive (where you're cloning that data to). If you have a desktop computer and both drives are installed internally (or you're just cloning to a USB external drive for backup), great! You're ready to continue.

     

    If, however, you're using a laptop with only one drive bay, you'll need an external SATA-to-USB adapter, dock, or enclosure to connect your bare drive to the computer. Once you've connected your drive, you can go through the cloning process, then disconnect it and install the drive internally.

    In most cases, your destination drive will probably need to be as large as, or larger than, your source drive. If it isn't, you'll need to free up space on your source drive and shrink the main partition down to fit. (You'll probably only need to do this if you're migrating from a hard drive to a smaller SSD—and we have a separate guide on that process here.)

  • Windows Users: Clone Your Drive with Macrium Reflect Free

    Windows users have lots of great cloning tools available, but we'll be using Macrium Reflect Free. It's free, easy to use, and widely loved by many, so it's hard to go wrong.

    To install Macrium Reflect, download the "Home Use" installer from this page and start it up. It's just a tiny tool that will download the actual installer for you, based on the type of license you want. Choose the temporary folder for these files—I just put them in my Downloads folder—and click the Download button.

    Once it's finished, it'll automatically launch the Macrium installation wizard, which you can click right on through—the default options should be fine for our purposes. You can safely delete all the installer files from your Downloads folder once the wizard has finished.

  • Start Cloning Process

    Open Macrium Reflect and you'll see a detailed list of the disks connected to your computer. You have two main options: You can directly clone one disk to another, or create an image of a disk. Cloning allows you to boot from the second disk, which is great for migrating from one drive to another. Imaging, on the other hand, allows you to store as many full, 1-to-1 copies of your source disk as the destination's space will allow, which is useful for backups.

    Select the disk you want to copy (making sure to check the leftmost box if your disk has multiple partitions) and click "Clone This Disk" or "Image This Disk."

  • Choose Clone Destination

    In the next window, choose your destination disk—the one that will house your newly copied data. Note that this will erase all data on the disk, so be careful which one you choose. If there's any old data on it, you may want to select it and click the "Delete Existing Partitions" button until the drive is empty.
  • Schedule Your Clone

    The next page will ask you if you want to schedule this clone, which is useful if you want to regularly image your drive for backup purposes. I've skipped this, since I'm just doing a one-time clone. On the page after that, you can also save the backup and its schedule as an XML file for safe keeping, but I've unchecked that option for the same reason—I'm only doing this once for now.
  • Boot From Your Cloned Drive

    Finally, Macrium Reflect will begin the cloning process. This can take some time depending on the size of your drive, so give it time to do its thing. If you cloned your drive, you should be able to boot from it now by selecting it in your BIOS. If you're imaging your drive, you can keep the second drive connected for future image backups if need be.
  • Mac Users: Clone Your Drive with SuperDuper

    If you're on a Mac, we recommend SuperDuper for all your cloning needs. It's free, it's been around for years, and it's dead simple to use. Download the app, open the DMG file, and double-click on its icon to install it. (Don't drag it to your /Applications folder like you would most Mac apps; double-clicking on it should install it to your computer.)

    Once installed, open SuperDuper and you'll be greeted with its incredibly simple, intuitive interface. In the first menu next to "Copy," select the source disk you want to clone. In the second menu, select the destination disk you're cloning to—this will fully erase the drive in that second menu, so make sure there isn't anything important on it! When you're ready, click the "Copy Now" button. The process will begin. (Yeah, it's that easy.)

  • Finalize Your Drive Clone

    This may take a while, but when it's done, you have two choices. If you want to replace your Mac's internal drive with the new drive (say, if you're migrating to a larger drive), you can open up your Mac and swap those now—then boot up as normal.

    If you want to boot your cloned drive from USB, you can hold the Option key as your Mac starts up and select it from the boot list. Your cloned drive will be in the exact state your computer was during the cloning process, and you can continue working without skipping a beat.

Author:
Last update: 2019-05-05 11:05


How to Move WordPress to a New Host or Server With No Downtime

Move WordPress to new host

Step 1: Choose Your New WordPress Host

If you are stuck with a slow web host even after optimizing WordPress speed and performance, then it’s time to move your WordPress site to a new host that can handle your growing traffic.

When looking for a new WordPress hosting provider, it’s important to choose carefully, so you don’t have to move again any time soon.

Here’s who we recommend:

  • For reliable shared hosting, we recommend going with Bluehost. They’re officially recommended by WordPress.org. And with our Bluehost coupon, WPBeginner users get 60% off and a free domain name.
  • If you’re looking for cloud hosting or location-specific providers, then we recommend you check out Siteground. They have data centers across 3 different continents.
  • If you’re looking for dedicated servers, then we recommend you check out InMotion Hosting. Their commercial class servers and support are amazing.
  • If you’re looking for managed WordPress hosting, then we recommend you check out WP Engine. They are the best and most well-known provider in the business.

After buying your new hosting, do NOT install WordPress. We’ll do that in a later step. For now, your new web host account should be completely empty, with no files or folders in your main directory.

Step 2: Set Up Duplicator for Easy Migration

The first thing you need to do is install and activate the free Duplicator plugin on the website that you want to move. For more details, see our step by step guide on how to install a WordPress plugin.

Duplicator is a free plugin that we highly recommend. You can also use it to move your website to a new domain name without losing SEO.

However, in this article we will walk you through how to use it to migrate your WordPress site to a new hosting provider with zero downtime.

Once you have installed and activated Duplicator, go to the Duplicator » Packages page in your WordPress admin area.

Next, you need to click the ‘Create New’ button in the top right corner.

Creating a new package in Duplicator

After that, click the Next button and follow the steps to create your package.

Creating a new package in Duplicator

Make sure that your scan results check out (everything should say “Good”), and then click the Build button.

Build package

The process may take several minutes to complete, so leave the tab open as it works.

Once the process is complete, you will see download options for Installer and the Archive package. You need to click on the ‘One click download’ link to download both files.

Download package files

The archive file is a copy of your site, and the installer file will automate the installation process for you.

Step 3: Import Your WordPress Site to Your New Host

Now that you have downloaded both the archive and installer files, the next step is to upload them to your new web host.

You can do this by connecting to your new web host using FTP. If you’ve never done this before, check out our beginner’s guide to uploading files via FTP to WordPress.

Normally, you would enter your website’s domain name as the host when connecting your FTP client. However, since your domain name is still pointing to your old host, you’ll need to connect by entering your server’s IP address or server hostname. You can find this information in your new hosting account’s cPanel dashboard.

Server IP or hostname

If you are unable to find this information, then ask support at your new web host and they will help you out.

Using your FTP client, upload both the installer.php file and your archive .zip file to the root directory of your website. This is usually the /username/public_html/ folder. Again, if you are not sure, then ask your web hosting company.

Make sure that your root directory is completely empty. Some web hosting companies automatically install WordPress when you sign up. If you have WordPress installed in your root directory, then you need to delete WordPress first.

Now you need to upload both the archive zip file and installer.php file to your site’s root directory.

Step 4: Change The Hosts File to Prevent Downtime

Once you’ve uploaded both files to your new host, you need to access the installer.php file in a browser.

The file can be accessed using a URL like this:

http://www.example.com/installer.php

However, this URL will take you to your old web host, and you will get a 404 error. This is because your domain name is still pointing to your old web host.

Normally, folks will tell you to change your domain nameservers and point to your new host. However, that will result in your users seeing a broken website as you migrate it.

We’ll show you how you can access your new site temporarily on your computer, without affecting your old site.

This is done with a hosts file on your computer.

The hosts file can be used to map domain names to specific IP addresses. In this step, we will show you how to add an entry for your domain name in the hosts file so that it points to your new host, but only when using your computer.

Making these changes will allow you to access the files on your new host using your own domain name, while the rest of the world will still be accessing your site from the old host. This ensures 100% uptime.

The first thing you need to do is find the IP address of your new web hosting server. To find this, log in to your cPanel dashboard and click on the 'expand stats' link in the left-hand sidebar. Your server's address will be listed as Shared IP Address.

On some web hosting companies, you will find this information under the 'Account Information' heading.

Finding your server's IP Address

In the next step, Windows users need to go to Programs » All Programs » Accessories, right click on Notepad and select Run as Administrator. A Windows UAC prompt will appear, and you need to click on Yes to launch Notepad with administrator privileges.

On the Notepad screen, go to File » Open and then go to C:\Windows\System32\drivers\etc. Select hosts file and open it.

Mac users will need to open the Terminal app and enter this command to edit the hosts file:

sudo nano /private/etc/hosts

For both Windows and Mac users, at the bottom of the hosts file, you need to enter the IP address you copied and then enter your domain name. Like this:

192.168.1.22 www.example.com

Make sure that you replace the IP address with the one you copied from cPanel, and example.com with your own domain name. Save your changes, and you can now access your files on the new host using your domain name on your computer.
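To confirm the override is active on your computer, a quick check like the sketch below can help (the standard resolver honors the hosts file on most systems). The domain and IP are placeholders; use your own values.

import socket

DOMAIN = "www.example.com"     # placeholder: your domain
NEW_HOST_IP = "192.168.1.22"   # placeholder: the IP you copied from cPanel

resolved = socket.gethostbyname(DOMAIN)
print("Resolves to:", resolved)
print("Hosts-file override active:", resolved == NEW_HOST_IP)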

Important: Don't forget to undo the changes you made to the hosts file after you have finished the migration (step 6).

Step 5: Creating MySQL Database on Your New Host

Before we run the installer on the new host, first we need to create a MySQL database on your new hosting account. If you have already created a MySQL database then you can jump to the next step.

Creating a Database in cPanel

Go to your new hosting account's cPanel dashboard, scroll down to the Databases section, and click on the MySQL Databases icon.

MySQL Databases in cPanel

You will see a field to create a new database. Enter a name for your database, and click the 'Create Database' button.

Creating new database

After creating the MySQL database, scroll down to the MySQL Users section. Provide a username and password for your new user and click on the 'Create a user' button.

Create a MySQL user

Next, you need to add the user to the database. This gives the username you just created all the permissions it needs to work on your database.

Scroll down to the 'Add User to a Database' section. Select the database user you created from the dropdown menu, then select the database, and click on the Add button.

Add user to database

Your database is now ready to be used with WordPress. Be sure to make note of the database username and password.

Step 6: Begin the Duplicator Migration Process

Now we’re ready to run the installer. Navigate to this address in your browser window, replacing example.com with your domain name:

http://www.example.com/installer.php

Duplicator installer initialized

The installer will run a few tests and will show you ‘Pass’ next to archive and validation tests. Check the terms and conditions checkbox and continue by clicking on the next button.

Next, you will be asked to enter your MySQL host, database name, username, and password. The host is typically localhost. After that, enter the details of the database you created in the previous step.

Connect Database

You can click on the 'Test Database' button to make sure you entered the correct information. If Duplicator is able to connect, you will see a string starting with Pass. Otherwise, you will see the database connection error details.

Click on the next button to continue.

Duplicator will now import your WordPress database from the archive zip into your new database.

Next, it will ask you to update the site URL or Path. Since you are not changing domain names, you DON'T need to change anything here.

Click on the next button to continue.

Duplicator will run the final steps and will show you the login button.

Duplicator wizard finished

You can now login to your WordPress site on the new host to make sure that everything is working as expected.

Step 7: Update Your Domain

At this point, you’ve created a complete copy of your WordPress database and files on your new hosting server. But your domain still points to your old web hosting account.

To update your domain, you need to switch your DNS nameservers. This ensures that your users are taken to the new location of your website when they type your domain into their browsers.

If you registered your domain with your hosting provider, then it's best to transfer the domain to the new host. If you used a domain registrar like GoDaddy, Namecheap, etc., then you need to update your nameservers.

You will need the nameserver information from your new web host. This is usually a couple of URLs that look like this:

ns1.hostname.com
ns2.hostname.com

For the sake of this guide, we will show you how to change DNS nameservers with GoDaddy. Depending on your domain registrar or web host, the screenshots may look different from your setup. However, the basic concept is the same.

Just look for the domain management area and then look for nameservers. If you need assistance with updating your nameservers, you can ask your web hosting company.

First, you need to log in to your GoDaddy account and then click on Domains. After that, click on the Manage button next to the domain name you want to change.

Manage domain

Under the 'Additional Settings' section, click on 'Manage DNS' to continue.

Manage DNS

Now you need to scroll down to the Name servers section and click on the change button.

Change name servers

First, you will need to switch the nameserver type dropdown from 'Default' to 'Custom', and then fill in your new hosting provider's information under Nameservers.

Updating nameserver

Don’t forget to click on the save button to store your changes.

You have successfully changed the nameservers. DNS changes can take 4 – 48 hours to propagate for all users.

Since you now have the same content on both your old host and the new host, your users won't see any difference. Your WordPress migration will be seamless, with absolutely no downtime.

To be on the safe side, you can wait to cancel your old hosting account until 7 days after your migration.

Frequently Asked Questions

Here are a few questions many of our users ask while moving their websites from one hosting provider to another.

1. Can I sign up for a new hosting account without registering a domain name?

Yes, you can sign up for a hosting account without registering a domain name. Domain names and hosting are two different services, and you don't necessarily need to register a domain name when signing up with a new host. For more details, see our guide on the difference between a domain name and web hosting.

Some hosting providers will ask you to select a domain name as the first step when purchasing hosting. They will also give you the option to enter a domain name you already own.

2. Do I need to transfer my domain name to the new host?

No, you don’t need to transfer your domain name to the new host. However, transferring your domain name to your new hosting will make it easier to renew and manage under the same dashboard as your new hosting account.

For more on this topic, see our guide on domain names and how they work.

3. How do I fix the 'error establishing database connection' error in Duplicator?

If you are seeing an 'error connecting to database' or 'database connection error' message in Duplicator, then the most likely reason is that you entered incorrect information for your database connection.

Make sure that your database name, MySQL username, and password are correct. Some web hosting companies do not use localhost as the host for their MySQL servers. If this is the case, then you will need to ask your web host's support staff to provide you with the correct information.
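To rule out a credentials problem outside of Duplicator, you can run a quick connection test like the sketch below. It assumes the third-party pymysql package is installed (pip install pymysql) and uses placeholder values; substitute the database name, username, password, and host your provider gave you.

import pymysql

try:
    conn = pymysql.connect(
        host="localhost",        # some hosts use a different MySQL hostname
        user="db_username",      # placeholder
        password="db_password",  # placeholder
        database="db_name",      # placeholder
    )
    print("Connection OK")
    conn.close()
except pymysql.MySQLError as err:
    print("Connection failed:", err)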

4. How do I check that my site is now loading from new host?

There are several online tools that allow you to see who is hosting a website. After you have transferred your website to the new host, you can use any of these tools, and they will show you the name of the web hosting company hosting your website.

If it hasn’t been long since you migrated your website and made changes to your domain name server (DNS), then chances are that your site may still load from your old host. Domain name changes can take up to 48 hours to fully propagate.
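If you want to bypass your computer's hosts file and local DNS cache entirely, you can query a public resolver directly. The sketch below assumes the third-party dnspython package (version 2.x) is installed; the domain and IP are placeholders.

import dns.resolver

DOMAIN = "www.example.com"     # placeholder: your domain
NEW_HOST_IP = "203.0.113.10"   # placeholder: your new server's IP

resolver = dns.resolver.Resolver()
resolver.nameservers = ["8.8.8.8"]        # ask Google's public resolver
answer = resolver.resolve(DOMAIN, "A")
ips = [record.address for record in answer]
print("Public DNS returns:", ips)
print("Pointing at the new host:", NEW_HOST_IP in ips)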

5. Do I need to delete any files or data from old host?

When switching hosting companies, we recommend that you keep your old website for at least a week. After that, you can delete files from your old web host. If you are cancelling your account, then your web hosting provider will delete all your data according to their policy.

6. How long should I keep my account active on the old host?

Once you have migrated your website to the new host, and if you don’t have any other websites hosted with your old web host, then you can cancel your old web hosting account.

However, in some cases, you may have already paid them for yearly hosting. You should check their refund policy to see if you are eligible for any refund upon cancellation.

7. Bonus: Free Site Migration by Your New Host

If you’re looking to switch your web hosting, but the steps above sound too complicated, then you can choose the following providers, and they will migrate your website for you.

SiteGround, InMotion Hosting, and WP Engine offer free website migration for WPBeginner users.

We hope that this step by step guide helped you move WordPress to your new host with no downtime whatsoever. If you come across any issues with your WordPress migration, then check out our guide on the most common WordPress errors and how to fix them.

 

Author: Angelo A Vitale
Last update: 2018-12-10 20:17


Cloud Devices

How to reset a My Cloud (single bay) device (Western Digital, Cloud Device)

There are two options to reset a My Cloud device; both use the Reset button located on the back of the device (see the illustration for its exact location). Please see details and instructions below.



STOP Critical: The following process is Not Data Destructive and will not impact user data on the device.




My Cloud Reset Button Location

image


Option A: 4 Second Reset (Reset with Power On)

The 4 Second Reset will reset the following:

    • Admin Password (No password by default)
    • Network Mode (Default = DHCP)
image
Note: The 4 Second Reset will only reset the Admin Password. It will not reset the Admin User Name. In order to reset the Admin User Name perform Option B - the 40 Second Reset outlined below.



To execute the 4 Second Reset:

  • With the power on, using a paperclip or narrow tipped pen, press and hold the reset button for at least 4 seconds. The reset process will cause the device to reboot and may take up to 5 minutes to complete. Please wait until the Power LED is solid blue, indicating the device is ready to use.




Option B: 40 Second Reset (Reset with Power Off)

The 40 Second Reset, also known as System Only Restore, will reset the following:

  • Admin User Name (default = “admin”)
  • Admin Password (No password by default)
  • Device Name (default = “WDMyCloud”)
  • Remove all Users except Admin
  • All Share permissions (default = Public)
  • Automatic Firmware Update (default = off)
  • Network Mode (default = DHCP)
  • Remove all Alerts
  • mycloud.com account association (default = not configured)
  • Mobile app account association (default = not configured)
  • WD Sync association (default = not configured)
  • Backup jobs (default = not configured)
  • Safepoint jobs (default = not configured)



To execute the 40 Second Reset:

    1. Power down the device and remove the power cord from the device
    2. Using a paperclip or narrow tipped pen, press and hold the reset button
    3. While continuing to hold the reset button, reconnect the power cord to the device and continue to hold the reset button for at least 40 seconds
    4. After releasing the reset button the device will reboot
image
Note: This process may take upwards of 15 minutes. Please wait until the Power LED is solid blue, indicating the device is ready to use.

Author: Angelo A Vitale
Last update: 2018-12-15 11:02


How to set up a My Cloud (single bay) device using the Dashboard

Answer ID 14001

Please see the My Cloud Video Tutorials for a quick overview.

For more detailed information please see Answer ID 19619 My Cloud Online User Guide and Solutions



Please follow the steps below to set up a My Cloud device for the first time or after the device has recently been reset to factory settings. If this is not the first time setting up the My Cloud device, please perform a 40 second reset prior to following the instructions below. For assistance resetting the device to its factory settings, please see Answer ID 10432: How to reset a My Cloud device.

image




image
Note: Screen Shots may vary depending on the version of the My Cloud device.




  1. Power on and connect the My Cloud device:
    1. Connect the device. For detailed instructions on powering up the device, please see the User Manual
    2. Connect the device to a network router
    3. Wait for the Power LED to be Solid Blue
    image
  2. Using a computer that is connected to the same network as the My Cloud device, launch a web browser
  3. Enter one of the following URL addresses into the URL field of the web browser:
    • Windows: http://wdmycloud
    • macOS: http://wdmycloud.local
    • Please see Answer ID 10420: How to access the Dashboard on a My Cloud for more information regarding accessing the dashboard on a My Cloud device
    image

      • The device setup page will appear

        STOP Critical: If the Dashboard login page is displayed or the device setup page does not appear, please perform a 40 second reset. For assistance resetting the device to its factory settings, please see Answer ID 10432: How to reset a My Cloud device.



        1. Select a language from the “Choose your language” dropdown
        2. Click the WD End User License Agreement (EULA) link to read the EULA. To agree with the EULA, check the box next to the link
        3. Click Continue
        image
      • Set an administrator password or leave the fields blank and click Next
      image
      • Enter the requested information:
          1. First Name
          2. Last Name
          3. Email Address

        image
        Note: The email address provided here will be the username for MyCloud.com. For more information about MyCloud.com and accessing the device from outside the local network, please see Answer ID 13105: How to access and share files using MyCloud.com.


        image
      • Click Save
      image
      • Click Next
      image
      • We highly recommend keeping the options below and registering the product. Click Next
      image
      • Click Finish to complete the setup process
      image
      • The My Cloud device Dashboard will be displayed
      image

Author: Angelo A Vitale
Last update: 2018-12-15 11:03


Error: User doesn't belong to SSLVPN service group when connecting to SSL-VPN



Description



When connecting to the UTM SSL-VPN, either using the NetExtender client or a browser, users get the following error: User doesn't belong to SSLVPN service group
image
This error occurs because the user attempting the connection, or the group the user belongs to, does not belong to the SSLVPN Services group. This KB article describes how to add a user and a user group to the SSLVPN Services group.




Resolution

  1. Login to the SonicWall management interface
  2. Navigate to the Users | Local Users page
  3. Click the Configure button under the user to edit the user
  4. Click on the Groups tab
  5. Scroll down and select SSLVPN Services under User Groups
  6. Click on the right arrow to add the user to the Member Of box
  7. Click on OK.
    image

To add a user group to the SSLVPN Services group

  1. Navigate to the Users | Local Groups page
  2. Click the Configure button under the SSLVPN Services group to edit the group
  3. Click on the Members tab
  4. Scroll down and select the user group under Group Memberships 
  5. Click on the right arrow to add the user to the Member Users and Groups box
  6. Click on OK.
    image

Resolution for SonicOS 6.5 and Later

SonicOS 6.5 was released September 2017. This release includes significant user interface changes and many new features that are different from the SonicOS 6.2 and earlier firmware. The below resolution is for customers using SonicOS 6.5 and later firmware.

  1. Login to the SonicWall management interface
  2. Navigate to the Manage tab
  3. Go to Users | Local Users & Groups page
  4. Click on the Local Users tab
  5. Click the Configure button next to the user to edit it
  6. Click on the Groups tab
  7. Scroll down and select SSLVPN Services under User Groups
  8. Click on the right arrow to add the user to the Member Of box
  9. Click on OK.
    image

To add a user group to the SSLVPN Services group

  1. Navigate to the Manage tab
  2. Go to Users | Local Users & Groups page
  3. Click on the Local Groups tab
  4. Click the Configure button under the SSLVPN Services group to edit the group
  5. Click on the Members tab
  6. Scroll down and select the user group under Group Memberships 
  7. Click on the right arrow to add the user to the Member Users and Groups box
  8. Click on OK.
    image
    https://www.sonicwall.com/en-us/support/knowledge-base/170505426969912

Author: Angelo A Vitale
Last update: 2018-12-11 00:06


How to Configure a Site to Site VPN Policy using Main Mode

Description

This article details how to configure a Site-to-Site VPN using Main Mode, which requires the SonicWall and the Remote VPN Concentrator to both have Static, Public IP Addresses.

Resolution


Step 1: Creating Address Objects for VPN subnets:

1. Login to the SonicWall Management Interface

2. Navigate to Network | Address Objects, scroll down to the bottom of the page and click on the Add button.

On the NSA 2400

image

On the NSA 240

image

3. Configure the Address Objects as mentioned in the figure above, click Add and click Close when finished.



Step 2: Configuring a VPN policy on Site A SonicWall

1. Navigate to the VPN | Settings page and click the Add button. The VPN Policy window is displayed.

2. Click the General tab.

  • Select IKE using Preshared Secret from the Authentication Method menu.
  • Enter a name for the policy in the Name field.
  • Enter the WAN IP address of the remote connection in the IPsec Primary Gateway Name or Address field (Enter NSA 240's WAN IP address).

TIP: If the Remote VPN device supports more than one endpoint, you may optionally enter a second host name or IP address of the remote connection in the IPsec Secondary Gateway Name or Address field.

  • Enter a Shared Secret password to be used to set up the Security Association in the Shared Secret and Confirm Shared Secret fields. The Shared Secret must be at least 4 characters long and should comprise both numbers and letters (a small sketch for generating such a secret follows this section).
  • Optionally, you may specify a Local IKE ID (optional) and Peer IKE ID (optional) for this Policy. By default, the IP Address (ID_IPv4_ADDR) is used for Main Mode negotiations, and the SonicWall Identifier (ID_USER_FQDN) is used for Aggressive Mode.

image
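As referenced above, here is a minimal, hypothetical sketch (not part of the SonicWall procedure) for generating a shared secret that satisfies the length and character requirements; paste the same value into both ends of the tunnel.

import secrets
import string

# Generate a 20-character random secret containing both letters and digits.
alphabet = string.ascii_letters + string.digits
while True:
    candidate = "".join(secrets.choice(alphabet) for _ in range(20))
    if any(c.isalpha() for c in candidate) and any(c.isdigit() for c in candidate):
        print(candidate)
        break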

3. Click the Network Tab.

  • Under Local Networks, select a local network from Choose local network from list: and select the address object X0 Subnet (LAN Primary Subnet)
  • Under Destination Networks, select Choose destination network from list: and select the address object NSA 240 Site (Site B network)



NOTE: DHCP over VPN is not supported with IKEv2.



image
4. Click the Proposals Tab.

  • Under IKE (Phase 1) Proposal, select Main Mode from the Exchange menu. Aggressive Mode is generally used when WAN addressing is dynamically assigned. IKEv2 causes all the negotiation to happen via IKE v2 protocols, rather than using IKE Phase 1 and Phase 2. If you use IKE v2, both ends of the VPN tunnel must use IKE v2.
  • Under IKE (Phase 1) Proposal, the default values for DH Group, Encryption, Authentication, and Life Time are acceptable for most VPN configurations. Be sure the Phase 1 values on the opposite side of the tunnel are configured to match. You can also choose AES-128, AES-192, or AES-256 from the Encryption menu instead of 3DES for enhanced security.

NOTE: The Windows 2000 L2TP client and Windows XP L2TP client can only work with DH Group 2. They are incompatible with DH Groups 1 and 5.


  • Under IPsec (Phase 2) Proposal, the default values for Protocol, Encryption, Authentication, Enable Perfect Forward Secrecy, DH Group, and Lifetime are acceptable for most VPN SA configurations. Be sure the Phase 2 values on the opposite side of the tunnel are configured to match.

image

5. Click the Advanced Tab.

  • Select Enable Keep Alive to use heartbeat messages between peers on this VPN tunnel. If one end of the tunnel fails, using Keepalives will allow for the automatic
    renegotiation of the tunnel once both sides become available again without having to wait for the proposed Life Time to expire.
  • Select Enable Windows Networking (NetBIOS) Broadcast to allow access to remote network resources by browsing the Windows® Network Neighborhood.
  • To manage the local SonicWall through the VPN tunnel, select HTTP, HTTPS, or both from Management via this SA. Select HTTP, HTTPS, or both in the User login via this SA to allow users to login using the SA.
  • If you wish to use a router on the LAN for traffic entering this tunnel destined for an unknown subnet, for example, if you configured the other side to Use this VPN Tunnel as default route for all Internet traffic, you should enter the IP address of your router into the Default LAN Gateway (optional) field.
  • Select an interface or zone from the VPN Policy bound to menu. A Zone WAN is the preferred selection if you are using WAN Load Balancing and you wish to allow the VPN to use either WAN interface.
  • Click OK to apply the settings.

image



Step 3: Configuring a VPN policy on Site B SonicWall 

1. Login to the Site B SonicWall appliance and navigate to the VPN | Settings page and click the Add button. The VPN Policy window is displayed.

2. Click the General Tab.

  • Select IKE using Preshared Secret from the Authentication Method menu.
  • Enter a name for the policy in the Name field.
  • Enter the WAN IP address of the remote connection in the IPsec Primary Gateway Name or Address field (Enter NSA 2400's WAN IP address).
  • If the Remote VPN device supports more than one endpoint, you may optionally enter a second host name or IP address of the remote connection in the IPsec Secondary Gateway Name or Address field.

NOTE: Secondary gateways are not supported with IKEv2.

  • Enter a Shared Secret password to be used to set up the Security Association in the Shared Secret and Confirm Shared Secret fields. The Shared Secret must be at least 4 characters long and should comprise both numbers and letters.
  • Optionally, you may specify a Local IKE ID (optional) and Peer IKE ID (optional) for this Policy. By default, the IP Address (ID_IPv4_ADDR) is used for Main Mode negotiations, and the SonicWall Identifier (ID_USER_FQDN) is used for Aggressive Mode.

image

3. Click the Network Tab.

  • Under Local Networks, select a local network from Choose local network from list: and select the address object X0 Subnet (LAN Primary Subnet)

NOTE: DHCP over VPN is not supported with IKEv2.

  • Under Destination Networks, select Choose destination network from list: and select the address object NSA 2400 Site (Site A network)
    image

4. Click the Proposals Tab.

NOTE: Settings must be the same as Site A.

image

5. Click the Advanced Tab.

  • Select Enable Keep Alive to use heartbeat messages between peers on this VPN tunnel. If one end of the tunnel fails, using Keepalives will allow for the automatic
    renegotiation of the tunnel once both sides become available again without having to wait for the proposed Life Time to expire.
  • Select Enable Windows Networking (NetBIOS) Broadcast to allow access to remote network resources by browsing the Windows® Network Neighborhood.
  • To manage the local SonicWall through the VPN tunnel, select HTTP, HTTPS, or both from Management via this SA. Select HTTP, HTTPS, or both in the User login via this SA to allow users to login using the SA.
  • If you wish to use a router on the LAN for traffic entering this tunnel destined for an unknown subnet, for example, if you configured the other side to Use this VPN Tunnel as default route for all Internet traffic, you should enter the IP address of your router into the Default LAN Gateway (optional) field.
  • Select an interface or zone from the VPN Policy bound to menu. A Zone WAN is the preferred selection if you are using WAN Load Balancing and you wish to allow the VPN to use either WAN interface.
  • Click OK to apply the settings.

image



Resolution for SonicOS 6.5 and Later

SonicOS 6.5 was released September 2017. This release includes significant user interface changes and many new features that are different from the SonicOS 6.2 and earlier firmware. The below resolution is for customers using SonicOS 6.5 and later firmware.


Step 1: Creating Address Objects for VPN subnets:

1. Login to the SonicWall Management Interface

2. Click Manage in the top navigation menu

3. Navigate to Objects | Address Objects, scroll down to the bottom of the page and click on the Add button.

On the NSA 2650

image

On the NSA 4600

image

4. Configure the Address Objects as mentioned in the figure above, click Add and click Close when finished.



Step 2: Configuring a VPN policy on Site A SonicWall

1. Click Manage in the top navigation menu and navigate to the VPN | Base Settings page, then click the Add button. The VPN Policy window is displayed.

2. Click the General tab.

  • Select IKE using Preshared Secret from the Authentication Method menu.
  • Enter a name for the policy in the Name field.
  • Enter the WAN IP address of the remote connection in the IPsec Primary Gateway Name or Address field (Enter the NSA 2650's WAN IP address).

TIP: If the Remote VPN device supports more than one endpoint, you may optionally enter a second host name or IP address of the remote connection in the IPsec Secondary Gateway Name or Address field.

  • Enter a Shared Secret password to be used to set up the Security Association in the Shared Secret and Confirm Shared Secret fields. The Shared Secret must be at least 4 characters long and should comprise both numbers and letters.
  • Optionally, you may specify a Local IKE ID (optional) and Peer IKE ID (optional) for this Policy. By default, the IP Address (ID_IPv4_ADDR) is used for Main Mode negotiations, and the SonicWall Identifier (ID_USER_FQDN) is used for Aggressive Mode.

image

3. Click the Network Tab.

  • Under Local Networks, select a local network from Choose local network from list: and select the address object X0 Subnet (LAN Primary Subnet)
  • Under Destination Networks, select Choose destination network from list: and select the address object NSA 2650 Site (Site B network)



NOTE: DHCP over VPN is not supported with IKEv2.



image
4. Click the Proposals Tab.

  • Under IKE (Phase 1) Proposal, select Main Mode from the Exchange menu. Aggressive Mode is generally used when WAN addressing is dynamically assigned. IKEv2 causes all the negotiation to happen via IKE v2 protocols, rather than using IKE Phase 1 and Phase 2. If you use IKE v2, both ends of the VPN tunnel must use IKE v2.
  • Under IKE (Phase 1) Proposal, the default values for DH Group, Encryption, Authentication, and Life Time are acceptable for most VPN configurations. Be sure the Phase 1 values on the opposite side of the tunnel are configured to match. You can also choose AES-128, AES-192, or AES-256 from the Encryption menu instead of 3DES for enhanced security.

NOTE: The Windows 2000 L2TP client and Windows XP L2TP client can only work with DH Group 2. They are incompatible with DH Groups 1 and 5.


  • Under IPsec (Phase 2) Proposal, the default values for Protocol, Encryption, Authentication, Enable Perfect Forward Secrecy, DH Group, and Lifetime are acceptable for most VPN SA configurations. Be sure the Phase 2 values on the opposite side of the tunnel are configured to match.

image

5. Click the Advanced Tab.

  • Select Enable Keep Alive to use heartbeat messages between peers on this VPN tunnel. If one end of the tunnel fails, using Keepalives will allow for the automatic
    renegotiation of the tunnel once both sides become available again without having to wait for the proposed Life Time to expire.
  • Select Enable Windows Networking (NetBIOS) Broadcast to allow access to remote network resources by browsing the Windows® Network Neighborhood.
  • To manage the local SonicWall through the VPN tunnel, select HTTP, HTTPS, or both from Management via this SA. Select HTTP, HTTPS, or both in the User login via this SA to allow users to login using the SA.
  • If you wish to use a router on the LAN for traffic entering this tunnel destined for an unknown subnet, for example, if you configured the other side to Use this VPN Tunnel as default route for all Internet traffic, you should enter the IP address of your router into the Default LAN Gateway (optional) field.
  • Select an interface or zone from the VPN Policy bound to menu. A Zone WAN is the preferred selection if you are using WAN Load Balancing and you wish to allow the VPN to use either WAN interface.
  • Click OK to apply the settings.

image



Step 3: Configuring a VPN policy on Site B SonicWall 

1. Login to the Site B SonicWall appliance and click Manage in the top navigation menu. Navigate to the VPN | Base Settings page and click the Add button. The VPN Policy window is displayed.

2. Click the General Tab.

  • Select IKE using Preshared Secret from the Authentication Method menu.
  • Enter a name for the policy in the Name field.
  • Enter the WAN IP address of the remote connection in the IPsec Primary Gateway Name or Address field (Enter NSA 4600's WAN IP address).
  • If the Remote VPN device supports more than one endpoint, you may optionally enter a second host name or IP address of the remote connection in the IPsec Secondary Gateway Name or Address field.

NOTE: Secondary gateways are not supported with IKEv2.

  • Enter a Shared Secret password to be used to set up the Security Association in the Shared Secret and Confirm Shared Secret fields. The Shared Secret must be at least 4 characters long and should comprise both numbers and letters.
  • Optionally, you may specify a Local IKE ID (optional) and Peer IKE ID (optional) for this Policy. By default, the IP Address (ID_IPv4_ADDR) is used for Main Mode negotiations, and the SonicWall Identifier (ID_USER_FQDN) is used for Aggressive Mode.

image

3. Click the Network Tab.

  • Under Local Networks, select a local network from Choose local network from list: and select the address object X0 Subnet (LAN Primary Subnet)

NOTE: DHCP over VPN is not supported with IKEv2.

  • Under Destination Networks, select Choose destination network from list: and select the address object NSA 4600 Site (Site A network)
    image

4. Click the Proposals Tab.

NOTE: Settings must be the same as Site A.

image

5. Click the Advanced Tab.

  • Select Enable Keep Alive to use heartbeat messages between peers on this VPN tunnel. If one end of the tunnel fails, using Keepalives will allow for the automatic
    renegotiation of the tunnel once both sides become available again without having to wait for the proposed Life Time to expire.
  • Select Enable Windows Networking (NetBIOS) Broadcast to allow access to remote network resources by browsing the Windows® Network Neighborhood.
  • To manage the local SonicWall through the VPN tunnel, select HTTP, HTTPS, or both from Management via this SA. Select HTTP, HTTPS, or both in the User login via this SA to allow users to login using the SA.
  • If you wish to use a router on the LAN for traffic entering this tunnel destined for an unknown subnet, for example, if you configured the other side to Use this VPN Tunnel as default route for all Internet traffic, you should enter the IP address of your router into the Default LAN Gateway (optional) field.
  • Select an interface or zone from the VPN Policy bound to menu. A Zone WAN is the preferred selection if you are using WAN Load Balancing and you wish to allow the VPN to use either WAN interface.
  • Click OK to apply the settings.

image

Author: Angelo A Vitale
Last update: 2018-12-11 00:12


How To configure a Site to Site VPN tunnel between a SonicWall and Linksys VPN Router

Description

This article covers how to configure a site to site VPN tunnel between a SonicWall and Linksys VPN router in aggressive mode.

Resolution

Procedure:

SonicWall Configuration

First, on the SonicWall, you must create an address object for the remote network.

1) Log into the SonicWall.
2) Browse to Network, then Address Objects
3) Create a new Address Object for the network on the LinkSys VPN router end you wish to reach (LinkSys LAN).



image



Next, on the SonicWall you must create an SA.

1) Browse to VPN, then Settings (default view for VPN).
2) Ensure that "Enable VPN" is selected.
3) Click Add.
4) Change the Authentication Method to "IKE using pre-shared secret".
5) Name the SA, in this example "Tunnel to LinkSys VPN Router".
6) Enter the WAN IP of the LinkSys VPN router for "IPSec Primary Gateway Name or Address:".
7) Enter your shared secret, in this example "P@ss20140603"
8) Define Local IKE ID & Peer IKE ID. In this example the Local IKE ID is "Yahoo.com" and the Peer IKE ID is "Google.com"



image



1) Select the "Network" tab.
2) Select "Lan Subnets" for Local Networks from the drop down box
3) Select the address object previously created for the destination network.



image



1) Select the "Proposals" tab.
2) Configure DH group under IKE Phase 1 to "Group 1".
3) Configure Phase 1 Encryption "3DES" & authentication "SHA1".
4) Configure Phase 2 Encryption "3DES" & authentication "SHA1".
5) Enable Perfect Forward Secrecy and select the DH Group as "Group1"
6) Configure Phase 1 & Phase 2 Life Time "28800"



image



1) Select "Advanced" tab.
2) Ensure that Keep Alive is enabled on only one end of the tunnel, typically on the device with a dynamically assigned (DHCP) WAN IP. In this example it is the LinkSys VPN Router.
3) Select "Enable Windows Networking (NetBIOS) Broadcast" if you would like to pass NetBIOS across the VPN.



image



LinkSys VPN Router Configuration

Go to VPN Gateway to Gateway 
>> Edit the tunnel

  1. Define the Tunnel/Gateway.
  2. Select interface WAN1
  3. Check the "Enable" option.

image



>> Local Group Setup

  1. Select the "Local Security Gateway Type" as " IP + Domain name (FQDN) Authentication"
  2. Choose a domain name. In this example it is "Google.com".
  3. Choose "Local Security Group Type" as "Subnet"
  4. Mention the IP address and subnet mask of the local network which are behind the Linksys VPN Router


>>Remote Group Setup

  1. Select the "Remote Security Gateway Type" as " IP + Domain name (FQDN) Authentication"
  2. Mention the IP address of the remote firewall. In this case it is the IP of the SonicWall Firewall.
  3. Choose a domain name. In this example it is "Yahoo.com"
  4. Choose "Remote Security Group Type" as "Subnet"
  5. Mention the IP address of the network which are behind the SonicWall or the network which you want to access behind the SonicWall

image



>>IPSec Setup

  1. Select Keying mode as "IKE with Preshared key"
  2. Select Phase 1 DH Group as "Group1"
  3. Select Phase 1 encryption as "3DES"
  4. Select Phase 1 Authentication as "SHA1"
  5. Mention the Phase 1 SA lifetime as "28800"
  6. Enable Perfect Forward Secrecy
  7. Select Phase 2 DH Group as "Group1"
  8. Select Phase 2 encryption as "3DES"
  9. Select Phase 2 Authentication as "SHA1"
  10. Mention the Phase 2 SA lifetime as "28800"
  11. Mention the Pre-shared key. This key should be the same on both devices, the SonicWall as well as the LinkSys VPN router.

image



>>Click on "Advanced"

  1. Enable the Aggressive Mode
  2. Enable Keep Alives
  3. Enable NetBios (If needed)
  4. Enable Dead Peer Detection (If needed)

image






Resolution for SonicOS 6.5 and Later

SonicOS 6.5 was released September 2017. This release includes significant user interface changes and many new features that are different from the SonicOS 6.2 and earlier firmware. The below resolution is for customers using SonicOS 6.5 and later firmware.

Procedure:


SonicWall Configuration

First, on the SonicWall, you must create an address object for the remote network.

1) Log into the SonicWall.
2) Browse to Manage > Policies > Objects > Address Objects
3) Create a new Address Object for the network on the LinkSys VPN router end you wish to reach (LinkSys LAN).



image

Next, on the SonicWall you must create an SA.

1) Browse to VPN, then Settings (default view for VPN).
2) Ensure that "Enable VPN" is selected.
3) Click Add.
4) Change the Authentication Method to "IKE using pre-shared secret".
5) Name the SA, in this example "Tunnel to LinkSys VPN Router".
6) Enter the WAN IP of the LinkSys VPN router for "IPSec Primary Gateway Name or Address:".
7) Enter your shared secret, in this example "P@ss20140603"
8) Define Local IKE ID & Peer IKE ID. In this example the Local IKE ID is "Yahoo.com" and the Peer IKE ID is "Google.com"



image

1) Select the "Network" tab.
2) Select "Lan Subnets" for Local Networks from the drop down box
3) Select the address object previously created for the destination network.



image

1) Select the "Proposals" tab.
2) Configure DH group under IKE Phase 1 to "Group 1".
3) Configure Phase 1 Encryption "3DES" & authentication "SHA1".
4) Configure Phase 2 Encryption "3DES" & authentication "SHA1".
5) Enable Perfect Forward Secrecy and select the DH Group as "Group1"
6) Configure Phase 1 & Phase 2 Life Time "28800"



image

1) Select "Advanced" tab.
2) Ensure that Keep Alive is enabled on only one end of the tunnel, typically on the device with a dynamically assigned (DHCP) WAN IP. In this example it is the LinkSys VPN Router.
3) Select "Enable Windows Networking (NetBIOS) Broadcast" if you would like to pass NetBIOS across the VPN.



image

LinkSys VPN Router Configuration

Go to VPN Gateway to Gateway 
>> Edit the tunnel

  1. Define the Tunnel/Gateway.
  2. Select interface WAN1
  3. Check the "Enable" option.

image



>> Local Group Setup

  1. Select the "Local Security Gateway Type" as " IP + Domain name (FQDN) Authentication"
  2. Choose a domain name. In this example it is "Google.com".
  3. Choose "Local Security Group Type" as "Subnet"
  4. Mention the IP address and subnet mask of the local network which are behind the Linksys VPN Router


>>Remote Group Setup

  1. Select the "Remote Security Gateway Type" as " IP + Domain name (FQDN) Authentication"
  2. Mention the IP address of the remote firewall. In this case it is the IP of the SonicWall Firewall.
  3. Choose a domain name. In this example it is "Yahoo.com"
  4. Choose "Remote Security Group Type" as "Subnet"
  5. Mention the IP address of the network which are behind the SonicWall or the network which you want to access behind the SonicWall



image



>>IPSec Setup

  1. Select Keying mode as "IKE with Preshared key"
  2. Select Phase 1 DH Group as "Group1"
  3. Select Phase 1 encryption as "3DES"
  4. Select Phase 1 Authentication as "SHA1"
  5. Mention the Phase 1 SA lifetime as "28800"
  6. Enable Perfect Forward Secrecy
  7. Select Phase 2 DH Group as "Group1"
  8. Select Phase 2 encryption as "3DES"
  9. Select Phase 2 Authentication as "SHA1"
  10. Mention the Phase 2 SA lifetime as "28800"
  11. Mention the Pre-shared key. This key should be the same on both devices, the SonicWall as well as the LinkSys VPN router.



image



>>Click on "Advanced"

  1. Enable the Aggressive Mode
  2. Enable Keep Alives
  3. Enable NetBios (If needed)
  4. Enable Dead Peer Detection (If needed)



image

Author: Angelo A Vitale
Last update: 2018-12-11 00:15


How to configure SSL VPN/NetExtender For Clients With Overlapping Subnet


Description

SSL VPN or NetExtender enables remote users to access the corporate SonicWall LAN subnets over the Internet through a secure VPN tunnel. Sometimes the SonicWall LAN subnet and the local subnet of the client on which NetExtender is installed overlap, and in such a scenario accessing SonicWall LAN resources is not possible.

Cause

IP subnet overlap between SonicWall LAN and client computer IP scheme.

Resolution

This article explains one way to get around this problem. The solution involves configuring a virtual or dummy subnet with the same subnet mask as the SonicWall LAN subnet, which is used for a one-to-one mapping (NAT) of virtual IP addresses to the SonicWall LAN IP addresses.

Note: Let's consider the following IP scheme for the purpose of this article.

1. SonicWall LAN subnet 192.168.1.0 mask 255.255.255.0

2. LAN subnet of the computer where Netextender/Mobile connect is installed 192.168.1.0 mask 255.255.255.0

3. SSLVPN IP Pool used for the NetExtender virtual adapter 10.1.1.0 mask 255.255.255.0

4. Virtual or dummy subnet used to send traffic on 10.10.10.0 mask 255.255.255.0

Please refer to the article https://support.SonicWall.com/kb/sw10657 for SSL-VPN configuration.

The configuration steps are as follows.

Step 1. Creating address object for SSLVPN IP pool.

Step 2. Specify the address object in SSLVPN client setting.

Step 3. Create virtual/dummy subnet address object with zone LAN.

Step 4. Specify Virtual LAN Subnet address object in the SSL VPN Client routes.

Step 5. Add the Virtual LAN Subnet address object in VPN access of SSLVPN Services Local group.

Step 6. Creating NAT policy.

Step 7. Creating an Access rule.

Step 1. Creating address object for SSL VPN IP pool.

The IP range used for the SSLVPN IP Pool should not conflict with the IP scheme present on either the SonicWall or client side. The subnet used here is 10.1.1.0/24.
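If you are unsure whether a candidate pool collides with either side, the check below (a small sketch using Python's standard ipaddress module, with the example subnets from this article) makes it easy to verify.

import ipaddress

sslvpn_pool = ipaddress.ip_network("10.1.1.0/24")       # proposed SSLVPN IP pool
sonicwall_lan = ipaddress.ip_network("192.168.1.0/24")  # SonicWall LAN subnet
client_lan = ipaddress.ip_network("192.168.1.0/24")     # remote client's LAN subnet

for name, net in [("SonicWall LAN", sonicwall_lan), ("client LAN", client_lan)]:
    print(name, "overlaps the pool:", sslvpn_pool.overlaps(net))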

Login to the SonicWall UTM appliance,
1) Go to Network -> Address Object, Click on Custom Address object radio button at the top.
2) Click on Add button under Address Object, to get Add address object Window. Create address object for SSLVPN lease Range.

  • Name: SSLVPN IP Pool (any friendly name you wish, but you will need to select it while configuring SSLVPN)
  • Zone: SSLVPN
  • Type: Network
  • Network: 10.1.1.0
  • Netmask/Prefix Length: 255.255.255.0

image



Step 2. Specify the address object in SSLVPN client setting as follows

1) Navigate to SSL VPN > Client setting >Click configure.

image

2) Specify the address object in the Network Address IPv4 option on the Setting tab.

image

Step 3. Create Virtual LAN Subnet address object with zone being LAN.

image

Step 4. Specify Virtual LAN Subnet address object in the SSL VPN Client routes.

image

Step 5. Add the Virtual LAN Subnet address object in VPN access of SSLVPN Services Local group.

Navigate to Users > Local Groups > SSLVPN Services and add the address object to the VPN Access list of this group.

image

This step is essential for the client computer to have a route to, and access to, the virtual subnet.

image

Step 6. Creating a NAT policy.

Go to Network> NAT Policies >Select the "Custom" radio button and click on "Add"

This NAT policy allows the translation of the virtual/dummy network to the actual SonicWall LAN network.

image

Step 7. Creating an Access rule.

Navigate to Firewall > Access Rules.

image

Go to SSLVPN to LAN page and create the following access rule.

image

How to test:

When NetExtender/Mobile Connect users on an overlapping network try to access the SonicWall LAN, they must use an IP address from the virtual/dummy subnet. For example, a client computer with the NetExtender IP 10.1.1.1 accesses a server using the virtual IP 10.10.10.65. When this traffic reaches the SonicWall, the device translates the destination IP from 10.10.10.65 to 192.168.1.65 (the actual LAN IP), and the access rule allows the traffic from the SSLVPN zone to the LAN zone.
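To make the one-to-one mapping concrete, here is a small illustrative sketch in Python of the translation the NAT policy performs for this example (the firewall does this itself; the snippet only reproduces the arithmetic, using the example subnets above).

import ipaddress

virtual_net = ipaddress.ip_network("10.10.10.0/24")   # dummy subnet clients target
real_net = ipaddress.ip_network("192.168.1.0/24")     # actual SonicWall LAN subnet

def translate(virtual_ip):
    # Map a host in the virtual subnet to the same host number in the real LAN.
    offset = int(ipaddress.ip_address(virtual_ip)) - int(virtual_net.network_address)
    return str(ipaddress.ip_address(int(real_net.network_address) + offset))

print(translate("10.10.10.65"))  # prints 192.168.1.65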


Resolution for SonicOS 6.5 and Later

SonicOS 6.5 was released September 2017. This release includes significant user interface changes and many new features that are different from the SonicOS 6.2 and earlier firmware. The below resolution is for customers using SonicOS 6.5 and later firmware.

Note: Let's consider the following IP scheme for the purpose of this article.

1. SonicWall LAN subnet 192.168.1.0 mask 255.255.255.0

2. LAN subnet of the computer where Netextender/Mobile connect is installed 192.168.1.0 mask 255.255.255.0

3. SSLVPN IP Pool used for the NetExtender virtual adapter 10.1.1.0 mask 255.255.255.0

4. Virtual or dummy subnet used to send traffic on 10.10.10.0 mask 255.255.255.0

Please refer to the article https://support.SonicWall.com/kb/sw10657 for SSL-VPN configuration.

The configuration steps are as follows.

Step 1. Creating address object for SSLVPN IP pool.

Step 2. Specify the address object in SSLVPN client setting.

Step 3. Create virtual/dummy subnet address object with zone LAN.

Step 4. Specify Virtual LAN Subnet address object in the SSL VPN Client routes.

Step 5. Add the Virtual LAN Subnet address object in VPN access of SSLVPN Services Local group.

Step 6. Creating NAT policy.

Step 7. Creating an Access rule.

Step 1. Creating address object for SSL VPN IP pool.

The IP range used for SSLVPN IP Pool should not conflict with IP scheme present on either SonicWall or client side. The subnet used here is 10.1.1.0/24.

Login to the SonicWall UTM appliance,
1) Go to Manage > Objects > Address Objects. Click on Add to create an Address Object for SSL VPN IP Pool.

  • Name: SSLVPN IP Pool (any friendly name you wish, but you will need to select it while configuring SSLVPN)
  • Zone: SSLVPN
  • Type: Network
  • Network: 10.1.1.0
  • Netmask/Prefix Length: 255.255.255.0

image

Step 2. Specify the address object in SSLVPN client setting as follows

1) Navigate to Manage > Connectivity > SSL VPN > Client setting > Click configure.

image

2) Specify the address object in the Network Address IPv4 option on the Setting tab.

image

Step 3. Create Virtual LAN Subnet address object with zone being LAN.

image

Step 4. Specify Virtual LAN Subnet address object in the SSL VPN Client routes.

image

Step 5. Add the Virtual LAN Subnet address object in VPN access of SSLVPN Services Local group.

Navigate to Manage > Users > Local Groups > SSLVPN Services and add the address object to the VPN Access list of this group.



image

This step is essential for the client computer to have a route to, and access to, the virtual subnet.

image

Step 6. Creating a NAT policy.

Go to Manage > Policies > Rules > NAT Policies. Click on Add to create a new custom policy.

This NAT policy allows the translation of the virtual/dummy network to the actual SonicWall LAN network.

image

Step 7. Creating an Access rule.

Navigate to Manage > Policies > Rules > Access Rules. 

Go to SSLVPN to LAN page and create the following access rule.

SSLVPN> LAN:

Source: SSLVPN IP Pool

Destination: Virtual LAN Subnet

Service: Any

Action : Allow

How to test:

When NetExtender/Mobile Connect users on an overlapping network try to access the SonicWall LAN, they must use an IP address from the virtual/dummy subnet. For example, a client computer with the NetExtender IP 10.1.1.1 accesses a server using the virtual IP 10.10.10.65. When this traffic reaches the SonicWall, the device translates the destination IP from 10.10.10.65 to 192.168.1.65 (the actual LAN IP), and the access rule allows the traffic from the SSLVPN zone to the LAN zone.

https://www.sonicwall.com/en-us/support/knowledge-base/170504796310067

https://community.spiceworks.com/topic/1968371-sslvpn-error-on-mac-os-x-ip-address-in-pool-is-not-configured

Author: Angelo A Vitale
Last update: 2018-12-11 00:38


How to Configure the SSL-VPN Feature for use with NetExtender or Mobile Connect


Description

SSL VPN is one method of allowing Remote Users to connect to the SonicWall and access internal network resources. SSL VPN Connections can be setup with one of three methods:

  • The SonicWall NetExtender Client
  • The SonicWall Mobile Connect Client
  • SSL VPN Bookmarks via the SonicWall Virtual Office

This article details how to setup the SSL VPN Feature for NetExtender and Mobile Connect Users, both of which are software based solutions. If you would like information on configuring Virtual Office please reference Configuring Virtual Office.

NetExtender is available for the following Operating Systems:

  • Microsoft Windows
  • Android
  • iOS
  • OS X
  • Linux Distributions

Mobile Connect is available for the following Operating Systems:

  • Windows 8.1 & 10
  • OS X
  • iOS
  • Android

Resolution

Creating an Address Object for the SSLVPN IPv4 Address Range

1. Login to the SonicWall Management GUI.
2. Navigate to Network | Address Objects and click Add... at the bottom of the page.
3. In the pop-up window, enter the information for your SSL VPN Range. An example Range is included below:

  • Name: SSL VPN Range TIP: This is only a Friendly Name used for Administration.
  • Zone: SSLVPN
  • Type : Range
  • Starting IP Address: 192.168.168.100
  • Ending IP Address: 192.168.168.110

image
SSLVPN Configuration

1. Navigate to the SSL-VPN | Server Settings page.

2. Click on the Red Bubble for WAN, it should become Green. This indicates that SSL VPN Connections will be allowed on the WAN Zone.

3. Set the Cipher Method, SSL VPN Port, and Domain as desired. The Cipher Method indicates the strength of the Public Key Infrastructure used for the VPN Connection, and the SSL VPN Port as well as the Domain are used for User Login.

image

NOTE: From 6.2.x Firmware, the Cipher options will be removed from the Server Settings Tab.

image

The SSL VPN | Client Settings page allows the administrator to configure the client address range information and NetExtender client settings, the most important being where the SSL-VPN will terminate (e.g. on the LAN in this case) and which IPs will be given to connecting clients. Finally, select from where users should be able to log in.

CAUTION: NetExtender cannot be terminated on an Interface that is paired to another Interface using Layer 2 Bridge Mode. This includes Interfaces bridged with a WLAN Interface. Interfaces that are configured with Layer 2 Bridge Mode are not listed in the "SSLVPN Client Address Range" Interface drop-down menu. For NetExtender termination, an Interface should be configured as a LAN, DMZ, WLAN, or a custom Trusted, Public, or Wireless zone, and also configured with the IP Assignment of "Static".

4. Click on the Configure button for the Default Device Profile as shown below.

image

NOTE: From 6.2.x Firmware, the Default Device Profile option will be added under the Client Settings tab.

image

5. Set the Zone IP V4 as SSLVPN. Set Network Address IP V4 as the Address Object you created earlier (SSLVPN Range).

image
The Client Routes tab allows the Administrator to control what network access SSL VPN Users are allowed. The NetExtender client routes are passed to all NetExtender clients and are used to govern which networks and resources remote users can access via the SSL VPN connection.

CAUTION: All SSL VPN Users can see these routes but without appropriate VPN Access on their User or Group they will not be able to access everything shown in the routes. Please make sure to set VPN Access appropriately.

image
The Client Settings tab allows the Administrator to input DNS, WINS, and Suffix information while also controlling the caching of passwords, user names, and the behavior of the NetExtender Client.

6. Input the necessary DNS/WINS information and a DNS Suffix if SSL VPN Users need to find Domain resources by name.

7. Enable Create Client Connection Profile - The NetExtender client will create a connection profile recording the SSL VPN Server name, the Domain name and optionally the username and password.

image

Adding Users to SSLVPN Services Group

NetExtender Users may either authenticate as a Local User on the SonicWall or as a member of an appropriate Group through LDAP. This article will cover setting up Local Users, however if you're interested in using LDAP please reference How to Configure LDAP Authentication for SSL-VPN Users.

1. Navigate to Users | Local Users and add a new User if necessary by using the Add User... button.

2. On the Members tab add SSL-VPN Services to the Member Users and Groups field.

3. On the VPN Access tab add the relevant Subnets, Range, or IP Address Address Objects that match what the User needs access to via NetExtender.

CAUTION: SSL VPN Users will only be able to access resources that match both their VPN Access and Client Routes.

image

Checking Access rule Information for SSLVPN Zone

1. Navigate to Firewall | Access Rules and open the SSL VPN to LAN Access Rules as indicated on the image below.

2. You should see an auto-created Access Rule similar to the image below. If not, recreate the rule as shown below.

3. If SSL VPN Users need access to resources on other Zones, such as the DMZ or a Custom Zone, verify or add those Access Rules. If you're unsure how to create an Access Rule please reference How to Enable Port Forwarding and Allow Access to a Server Through the SonicWall.

image

image

Testing the Connection

1. Download and install either SonicWall NetExtender or SonicWall Mobile Connect. NetExtender is available via MySonicWall.com or the Virtual Office page on the SonicWall. SonicWall Mobile Connect is available via the App Store, Windows Store, or Apple Store depending on your Operating System.

2. If using NetExtender, input the following:

  • IP Address or URL of the SonicWall WAN Interface, followed by the Port Number
  • User Name
  • Password
  • Domain

3. If using Mobile Connect, input the following:

  • IP Address or URL of the SonicWall WAN Interface, followed by the Port Number
  • Domain

NOTE: Mobile Connect will prompt for User and Password after it's able to verify a connection to the SonicWall. This is slightly different than NetExtender.

4. The connection should establish and the User should be able to access the appropriate resources.

TIP: Ping is a great tool to test access to resources once the VPN Connection has established. If Pings are Timing Out it's advisable to perform a Packet Monitor on the SonicWall to determine what is happening to the traffic.
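If ICMP is blocked somewhere along the path, a TCP connection test to a known service port can serve the same purpose as ping. The sketch below uses placeholder values; point it at an internal host and port covered by your Client Routes and VPN Access.

import socket

HOST = "192.168.168.10"   # placeholder: an internal server reachable over the SSL VPN
PORT = 443                # placeholder: any TCP service expected to be listening

try:
    with socket.create_connection((HOST, PORT), timeout=5):
        print("TCP connection to %s:%d succeeded" % (HOST, PORT))
except OSError as err:
    print("Connection failed:", err)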


Resolution for SonicOS 6.5 and Later

SonicOS 6.5 was released September 2017. This release includes significant user interface changes and many new features that are different from the SonicOS 6.2 and earlier firmware. The below resolution is for customers using SonicOS 6.5 and later firmware.

Creating an Address Object for the SSLVPN IPv4 Address Range

  1. Login to the SonicWall Management GUI.
  2. Click Manage in the top navigation menu
  3. Navigate to Objects | Address Objects and click +Add at the top of the pane.

image

3. In the pop-up window, enter the information for your SSL VPN Range. An example Range is included below:

  • Name: SSL VPN Range TIP: This is only a Friendly Name used for Administration.
  • Zone: SSLVPN
  • Type: Range. NOTE: This does not have to be a range and can be configured as a Host or Network as well
  • Starting IP Address: 192.168.168.100
  • Ending IP Address: 192.168.168.110

image
SSLVPN Configuration

1. Navigate to the SSL-VPN | Server Settings page.

2. Click on the Red Bubble for WAN, it should become Green. This indicates that SSL VPN Connections will be allowed on the WAN Zone.

3. Set the SSL VPN Port, and Domain as desired.

NOTE: The SSLVPN port will be needed when connecting using Mobile Connect and NetExtender unless the port number is 443. Port 443 can only be used if the management port of the firewall is not 443.

NOTE: The Domain is used during the user login process.

TIP: If you want to be able to manage the firewall via GUI or SSH over SSLVPN these features can be enabled separately here as well.

image

4. Navigate to the SSL VPN | Client Settings page.


The SSL VPN | Client Settings page allows the administrator to configure the client address range information and NetExtender client settings, the most important being where the SSL-VPN will terminate (e.g. on the LAN in this case) and which IPs will be given to connecting clients.

CAUTION: NetExtender cannot be terminated on an Interface that is paired to another Interface using Layer 2 Bridge Mode. This includes Interfaces bridged with a WLAN Interface. Interfaces that are configured with Layer 2 Bridge Mode are not listed in the "SSLVPN Client Address Range" Interface drop-down menu. For NetExtender termination, an Interface should be configured as a LAN, DMZ, WLAN, or a custom Trusted, Public, or Wireless zone, and also configured with the IP Assignment of "Static".

5. Click on the Configure button for the Default Device Profile.

image

On the Settings tab, set the Zone IP V4 to SSLVPN and set Network Address IP V4 to the Address Object you created earlier (SSL VPN Range).

image
6. The Client Routes tab allows the Administrator to control what network access SSL VPN Users are allowed. The NetExtender client routes are passed to all NetExtender clients and are used to govern which networks and resources remote users can access via the SSL VPN connection.

CAUTION: All SSL VPN Users can see these routes but without appropriate VPN Access on their User or Group they will not be able to access everything shown in the routes. Please make sure to set VPN Access appropriately.

image
7. The Client Settings tab allows the Administrator to input DNS, WINS, and Suffix information while also controlling the caching of passwords, user names, and the behavior of the NetExtender Client.

8. Input the necessary DNS/WINS information and a DNS Suffix if SSL VPN Users need to find Domain resources by name.

9. Enable Create Client Connection Profile - The NetExtender client will create a connection profile recording the SSL VPN Server name, the Domain name, and optionally the username and password.

image

Adding Users to SSLVPN Services Group

NetExtender Users may either authenticate as a Local User on the SonicWall or as a member of an appropriate Group through LDAP. This article will cover setting up Local Users, however if you're interested in using LDAP please reference How to Configure LDAP Authentication for SSL-VPN Users.

1. Navigate to Users | Local Users & Groups. Add a new User if necessary by using the + Add button.

image

2. On the Groups tab add SSLVPN Services to the Member Of: field.

image

3. On the VPN Access tab add the relevant Subnets, Range, or IP Address Address Objects that match what the User needs access to via NetExtender.

CAUTION: SSL VPN Users will only be able to access resources that match both their VPN Access and Client Routes.

image

4. Click OK to save these settings and close the window.

Checking Access Rule Information for SSLVPN Zone

1. Navigate to Rules | Access Rules.

2. Access the SSLVPN to LAN rules via the Zone drop-down options or the highlighted matrix button below.

image

3. You will need to create Access Rules similar to the image below allowing SSLVPN IPs to access your intended end devices.

NOTE: This does not grant access to all users; individual access is still granted to users based on their VPN Access and SSLVPN routes. Access Rules are needed for the firewall to allow this traffic through.

image

4. If SSL VPN Users need access to resources on other Zones, such as the DMZ or a Custom Zone, verify or add those Access Rules. If you're unsure how to create an Access Rule please reference How to Enable Port Forwarding and Allow Access to a Server Through the SonicWall.

Testing the Connection

1. Download and install either SonicWall NetExtender or SonicWall Mobile Connect. NetExtender is available via MySonicWall.com or the Virtual Office page on the SonicWall. SonicWall Mobile Connect is available via the Apple App Store, Google Play Store, or Windows Store, depending on your Operating System.

2. If using NetExtender, input the following:

  • IP Address or URL of the SonicWall WAN Interface, followed by the Port Number
  • User Name
  • Password
  • Domain

3. If using Mobile Connect, input the following:

  • Connection Name. TIP: This is a friendly name for your device.
  • IP Address or URL of the SonicWall WAN Interface, followed by the Port Number

NOTE: Mobile Connect will prompt for User and Password after it's able to verify a connection to the SonicWall. This is slightly different from NetExtender. If you are logging on from a desktop, you may need to specify your domain in the user field using the user@domain format.

4. The connection should establish and the User should be able to access the appropriate resources.

TIP: Ping is a great tool to test access to resources once the VPN connection has been established. If pings time out, it's advisable to perform a Packet Monitor on the SonicWall to determine what is happening to the traffic. Keep in mind that pings to the SonicWall itself are considered management traffic and require specific Access Rules to allow that traffic.
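Because ICMP to the firewall itself needs management rules, a TCP reachability check against an internal service is often a cleaner test. The following is a minimal, illustrative Python sketch; the internal host and port are placeholders:

# Minimal sketch: instead of ICMP, test TCP reachability of an internal service
# through the tunnel. Host and port are hypothetical examples.
import socket

HOST, PORT = "192.168.168.10", 3389  # e.g., an internal RDP server

try:
    with socket.create_connection((HOST, PORT), timeout=5):
        print(f"{HOST}:{PORT} is reachable through the SSL VPN")
except OSError as exc:
    print(f"{HOST}:{PORT} not reachable: {exc} - check VPN Access, Client Routes and Access Rules")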

https://www.sonicwall.com/en-us/support/knowledge-base/170505401898786

Author: Angelo A Vitale
Last update: 2018-12-11 00:44


How to setup SSL-VPN on SonicOS

This article provides information on how to configure the IPv6 SSL VPN features on the SonicWall security appliance. SonicWall's IPv6 SSL VPN features provide secure remote access to the IPv6 network using the NetExtender client.

NetExtender is an SSL VPN client for Windows, Mac, or Linux users that is downloaded transparently and that allows you to run any application securely on the company's IPv4/6 network. It uses Point-to-Point Protocol (PPP). NetExtender allows remote clients seamless access to resources on your local network. Users can access NetExtender two ways:


  • Logging in to the Virtual Office web portal provided by the SonicWall security appliance and clicking on the NetExtender button.
  • Launching the standalone NetExtender client.


The NetExtender standalone client is installed the first time you launch NetExtender. Thereafter, it can be accessed directly from the Start menu on Windows systems, from the Application folder or dock on MacOS systems, or by the path name or from the shortcut bar on Linux systems.




Resolution

Log in to the SonicWall UTM appliance and go to the SSL-VPN | Server Settings page, which allows the administrator to enable SSL VPN access on zones. From SonicOS Enhanced 6.2.x onwards, the SSL-VPN feature on UTM devices uses port 4433.



NOTE:

  • In older firmware versions the SSL-VPN Zones settings are available under SSL-VPN | Client Settings page.
  • SSL-VPN can only be connected to using interface IP addresses. By default, SSL-VPN is enabled on the WAN zone and users can connect to it using the WAN interface IP address. Likewise, if SSL-VPN is enabled on other zones, it can only be connected to using that interface's IP address.
    image
  1. The SSL VPN | Portal Settings page is used to configure the appearance and functionality of the SSL VPN Virtual Office web portal. The Virtual Office portal is the website that users log in to in order to launch NetExtender.
    image
  2. Configure the SSL VPN | Client Settings.

    The SSL VPN | Client Settings page allows the administrator to configure the client address range information and NetExtender client settings.
    The most important settings are where the SSL-VPN will terminate (e.g. on the LAN in this case) and which IPs will be given to connecting clients. Finally, select from where users should be able to log in (most likely the WAN, so just click on the WAN entry):

    NOTE: NetExtender cannot be terminated on an interface that is paired to another interface using L2 Bridge Mode. This includes interfaces bridged with a WLAN interface. Interfaces that are configured with L2 Bridge Mode are not listed in the "SSLVPN Client Address Range" Interface drop-down menu. For NetExtender termination, an interface should be configured as a LAN, DMZ, WLAN, or a custom Trusted, Public, or Wireless zone, and also configured with the IP Assignment of "Static".

    For SonicOS 6.2.x.x and above, first configure the traditional IPv4 IP address pool, and then configure an IPv6 IP Pool. Clients will be assigned two internal addresses: one IPv4 and one IPv6.
    image
    Configuring NetExtender Client Settings:
    Enable the option Create Client Connection Profile - The NetExtender client will create a connection profile recording the SSL VPN Server name, the Domain name and optionally the username and password.
  3. The SSL VPN | Client Routes page allows the administrator to control the network access allowed for SSL VPN users. The NetExtender client routes are passed to all NetExtender clients and are used to govern which private networks and resources remote users can access via the SSL VPN connection.
    NOTE: All clients can see these routes. Also, here you may enable/disable "Tunnel All Mode" (this is the equivalent of "This gateway only" option while configuring GroupVPN).

    image
  4. Under Users | Local users, ensure that the relevant user or user group is a member of the "SSLVPN Services" group:
    image
    To set up membership for a local or LDAP user group, edit the SSLVPN Services user group and add the user group under the Members tab.
    image
    VPN Access Tab:
    The VPN Access tab allows users to access networks using a VPN tunnel: select one or more networks from the Networks list and click the right arrow button to move them to the Access List. To remove the user's access to a network, select the network from the Access List and click the left arrow button.
    image
  5. Under Firewall | Access Rules, note the new SSLVPN zone:
    image
  6. Firewall access rules are auto-created to and from the SSLVPN zone and other zones. Optionally, you could modify the auto-created SSLVPN to LAN rule to allow access only to those users that are configured (it is recommended to use a single rule with groups rather than multiple rules with individual users). Ignore any warning that login needs to be enabled from the SSLVPN zone.
  7. Go to WAN interface and ensure HTTPS user login is enabled:
    image


How to test this scenario:

  1. Users can now go to the public IP of the SonicWall. Notice the new "Click here for SSL login" hyperlink:
    image
  2. Users can then log in and start NetExtender:

    NetExtender provides remote users with full access to your protected internal network. The experience is virtually identical to that of using a traditional IPSec VPN client, but NetExtender does not require any manual client installation. Instead, the NetExtender Windows client is automatically installed on a remote user's PC by an ActiveX control when using the Internet Explorer browser, or with the XPCOM plugin when using Firefox.
    On MacOS systems, supported browsers use Java controls to automatically install NetExtender from the Virtual Office portal. Linux systems can also install and use the NetExtender client.

    After installation, NetExtender automatically launches and connects a virtual adapter for secure SSL-VPN point-to-point access to permitted hosts and subnets on the internal network.
    image
    image

    Use the IPv6 address and the service port of the remote server to log in.
    image
    Both IPv4 and IPv6 addresses should be distributed on the client.
    image
    Ping can also be used to verify connectivity and functionality.
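To confirm from outside that the SSL-VPN service is answering on its port before troubleshooting the client, the following minimal Python sketch attempts a TLS handshake against the appliance; the public IP is a placeholder and 4433 is the default SSL-VPN port mentioned above:

# Minimal sketch: verify that the SSL-VPN service answers TLS on the WAN interface IP.
# The address is a placeholder; 4433 is the default SSL-VPN port.
import socket
import ssl

HOST, PORT = "203.0.113.10", 4433  # placeholder public IP of the SonicWall

context = ssl.create_default_context()
context.check_hostname = False       # the appliance usually presents a self-signed certificate
context.verify_mode = ssl.CERT_NONE

with socket.create_connection((HOST, PORT), timeout=5) as sock:
    with context.wrap_socket(sock, server_hostname=HOST) as tls:
        print(f"TLS handshake OK with {HOST}:{PORT}, negotiated {tls.version()}")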

Resolution for SonicOS 6.5 and Later

SonicOS 6.5 was released September 2017. This release includes significant user interface changes and many new features that are different from the SonicOS 6.2 and earlier firmware. The below resolution is for customers using SonicOS 6.5 and later firmware.

Login to the SonicWall appliance, click MANAGE, and navigate to the SSL-VPN | Server Settings page (this page allows the administrator to enable SSL VPN access on zones; from SonicOS Enhanced 6.2.x onwards the SSL-VPN feature on UTM devices uses port 4433).



NOTE:

  • In older firmware versions the SSL-VPN Zones settings are available under SSL-VPN | Client Settings page.
  • SSL-VPN can only be connected to using interface IP addresses. By default, SSL-VPN is enabled on the WAN zone and users can connect to it using the WAN interface IP address. Likewise, if SSL-VPN is enabled on other zones, it can only be connected to using that interface's IP address.

image



1. Configure the SSL VPN | Client Settings.

The SSL VPN | Client Settings page allows the administrator to configure the client address range information and NetExtender client settings.
The most important settings are where the SSL-VPN will terminate (e.g. on the LAN in this case) and which IPs will be given to connecting clients. Finally, select from where users should be able to log in (most likely the WAN, so just click on the WAN entry):



NOTE: NetExtender cannot be terminated on an interface that is paired to another interface using L2 Bridge Mode. This includes interfaces bridged with a WLAN interface. Interfaces that are configured with L2 Bridge Mode are not listed in the "SSLVPN Client Address Range" Interface drop-down menu. For NetExtender termination, an interface should be configured as a LAN, DMZ, WLAN, or a custom Trusted, Public, or Wireless zone, and also configured with the IP Assignment of "Static".

For SonicOS 6.2.x.x and above, first configure the traditional IPv4 IP address pool, and then configure an IPv6 IP Pool. Clients will be assigned two internal addresses: one IPv4 and one IPv6.



Click on the Configure icon for the Default Device Profile:

image



After this, on the Settings tab, we need to select the Zone for IP v4 (SSLVPN) and the address object for Network Address IPv4 (SSLVPN Pool). The object selected here contains the IP addresses that will be assigned to the clients (remote users who try to connect):



image



image

2. The SSL VPN | Client Routes page allows the administrator to control the network access allowed for SSL VPN users. The NetExtender client routes are passed to all NetExtender clients and are used to govern which private networks and resources remote users can access via the SSL VPN connection.


NOTE: All clients can see these routes. Also, here you may enable/disable "Tunnel All Mode" (this is the equivalent of "This gateway only" option while configuring GroupVPN).



image

Configuring NetExtender Client Settings:
Enable the option Create Client Connection Profile - The NetExtender client will create a connection profile recording the SSL VPN Server name, the Domain name and optionally the username and password.



image



3. The SSL VPN | Portal Settings page is used to configure the appearance and functionality of the SSL VPN Virtual Office web portal. The Virtual Office portal is the website that users log in to in order to launch NetExtender.



image



4. Under Users | Local users, ensure that the relevant user or user group is a member of the "SSLVPN Services" group:

For Local users, click MANAGE and navigate to System Setup | Users | Local Users and Groups.

Click on the Configure icon for the user, navigate to the Groups tab, and add SSLVPN Services.

image



To set up membership for a local or LDAP user group, edit the SSLVPN Services user group and add the user group under the Members tab.



image

VPN Access Tab:
The VPN Access tab allows users to access networks using a VPN tunnel: select one or more networks from the Networks list and click the right arrow button to move them to the Access List. To remove the user's access to a network, select the network from the Access List and click the left arrow button.



image

5. Under Firewall | Access Rules, note the new SSLVPN zone:



image





6. Firewall access rules are auto-created to and from the SSLVPN zone and other zones. Optionally, you could modify the auto-created SSLVPN to LAN rule to allow access only to those users that are configured (it is recommended to use a single rule with groups rather than multiple rules with individual users). Ignore any warning that login needs to be enabled from the SSLVPN zone.

7. Go to the WAN interface and ensure HTTPS user login is enabled (to do so, click MANAGE, navigate to Network | Interfaces, and click the Configure icon for X1 (Default WAN)):



image



How to test this scenario:

  1. Users can now go to the public IP of the SonicWall. Notice the new "Click here for SSL login" hyperlink:
    image
  2. Users can then log in and start NetExtender:

    NetExtender provides remote users with full access to your protected internal network. The experience is virtually identical to that of using a traditional IPSec VPN client, but NetExtender does not require any manual client installation. Instead, the NetExtender Windows client is automatically installed on a remote user's PC by an ActiveX control when using the Internet Explorer browser, or with the XPCOM plugin when using Firefox.
    On MacOS systems, supported browsers use Java controls to automatically install NetExtender from the Virtual Office portal. Linux systems can also install and use the NetExtender client.

    After installation, NetExtender automatically launches and connects a virtual adapter for secure SSL-VPN point-to-point access to permitted hosts and subnets on the internal network.
    image
    image

    Use the IPv6 address and the service port of the remote server to log in.
    image
    Both IPv4 and IPv6 addresses should be distributed on the client.
    image
    Ping can also be used to verify connectivity and functionality.

Author: Angelo A Vitale
Last update: 2018-12-11 00:57


SSL-VPN: How to configure LDAP authentication for SSL-VPN Users.

Description

This article outlines all necessary steps to configure LDAP authentication for SSL-VPN users.

Resolution

SSL-VPN Settings

  1. Login to the SonicWall Management GUI
  2. Navigate to the SSL-VPN | Server Settings page.
  3. Click WAN to enable SSL-VPN on the WAN zone.
    image
  4. Navigate to the SSL VPN | Client Settings page and enter the following information:
    image
  5. Navigate to the Client Routes page and enter the following information:
    image

LDAP Settings

  1. Navigate to the Users | Settings page.
  2. Select LDAP (or LDAP + Local Users) as authentication method and click on Configure.
  3. Enter the following information to configure LDAP authentication: image

    image
  4. In the following screenshot, a group called SSL-VPN Users is being imported. This or a similar group needs to have been created in the AD before performing this action.

    image

User Settings

  1. Navigate to the Users | Local Groups page.
  2. Click Configure on the newly imported SSL-VPN Users group.
  3. Under the VPN Access tab select LAN Subnets or any other subnets that you wish to allow for this user group.
  4. Click OK to save the settings. image
  5. To make the SSL-VPN Users group a member of the SSLVPN Services group, click Configure on SSLVPN Services and add the SSL-VPN Users group as a member.
  6. Click OK.
    image
  7. As per the above configuration, only members of the group SSL-VPN Users will be able to connect to SSL-VPN.

Resolution for SonicOS 6.5 and Later

SonicOS 6.5 was released September 2017. This release includes significant user interface changes and many new features that are different from the SonicOS 6.2 and earlier firmware. The below resolution is for customers using SonicOS 6.5 and later firmware.

Creating an Address Object for the SSLVPN IPv4 Address Range


  1. Login to the SonicWall Management GUI.
  2. Click Manage in the top navigation menu.
  3. Navigate to Objects | Address Objects and click +Add at the top of the pane.

image

4. In the pop-up window, enter the information for your SSL VPN Range. An example Range is included below:

  • Name: SSL VPN Range. TIP: This is only a friendly name used for administration.
  • Zone: SSLVPN
  • Type: Range. NOTE: This does not have to be a Range; it can be configured as a Host or Network as well.
  • Starting IP Address: 192.168.168.100
  • Ending IP Address: 192.168.168.110

image
SSLVPN Configuration

1. Navigate to the SSL-VPN | Server Settings page.

2. Click on the Red Bubble for WAN; it should become Green. This indicates that SSL VPN Connections will be allowed on the WAN Zone.

3. Set the SSL VPN Port and Domain as desired.

NOTE: The SSLVPN port will be needed when connecting using Mobile Connect and NetExtender unless the port number is 443. Port 443 can only be used if the management port of the firewall is not 443.

NOTE: The Domain is used during the user login process.

TIP: If you want to be able to manage the firewall via GUI or SSH over SSLVPN these features can be enabled separately here as well.

image

4. Navigate to the SSL VPN | Client Settings page.


The SSL VPN | Client Settings page allows the administrator to configure the client address range information and NetExtender client settings, the most important being where the SSL-VPN will terminate (e.g. on the LAN in this case) and which IPs will be given to connecting clients.


CAUTION: NetExtender cannot be terminated on an Interface that is paired to another Interface using Layer 2 Bridge Mode. This includes Interfaces bridged with a WLAN Interface. Interfaces that are configured with Layer 2 Bridge Mode are not listed in the "SSLVPN Client Address Range" Interface drop-down menu. For NetExtender termination, an Interface should be configured as a LAN, DMZ, WLAN, or a custom Trusted, Public, or Wireless zone, and also configured with the IP Assignment of "Static".

5. Click on the Configure button for the Default Device Profile.



image

On the Settings tab, set the Zone IP V4 to SSLVPN and set Network Address IP V4 to the Address Object you created earlier (SSL VPN Range).

image
6. The Client Routes tab allows the Administrator to control what network access SSL VPN Users are allowed. The NetExtender client routes are passed to all NetExtender clients and are used to govern which networks and resources remote users can access via the SSL VPN connection.

CAUTION: All SSL VPN Users can see these routes but without appropriate VPN Access on their User or Group they will not be able to access everything shown in the routes. Please make sure to set VPN Access appropriately.

image
7. The Client Settings tab allows the Administrator to input DNS, WINS, and Suffix information while also controlling the caching of passwords, user names, and the behavior of the NetExtender Client.

8. Input the necessary DNS/WINS information and a DNS Suffix if SSL VPN Users need to find Domain resources by name.

9. Enable Create Client Connection Profile - The NetExtender client will create a connection profile recording the SSL VPN Server name, the Domain name, and optionally the username and password.

image

LDAP Settings

  1. Navigate to the Users | Settings page.
  2. Select LDAP (or LDAP + Local Users) as authentication method and click on Configure LDAP.
  3. Click Add to add a new LDAP server.
  4. Enter the Name or IP address, Port Number, and indicate if you wish to Use TLS (SSL). Additionally, you will need to choose whether this is the Primary, Secondary, or a Backup/replica server. image
  5. On the Login/Bind tab, select the login type (Anonymous, Login name in tree, or Bind distinguished name) and enter the Login user name, User tree for login to server, and Password if applicable.

    image
  6. On the Directory tab, Make the necessary adjustments to the Trees containing users and the Trees containing user groups or use the AUTO-CONFIGURE button as appropriate.
    image
  7. Click SAVE to save these settings and close the window.
  8. In the LDAP configuration window, access the Users & Groups Tab and Click Import Users.
    image
  9. Select the appropriate LDAP server to import from along with the appropriate domain(s) to include.
    image
  10. Choose the way in which you prefer user names to display. NOTE: This is a personal preference and does not affect functionality.
    image


  11. Select the appropriate users you wish to import and click Save Selected. NOTE: Make a note of which users or groups are being imported as you will need to make adjustments to them in the next section of this article.

    image
  12. Click OK in the LDAP configuration window to save these settings and close the window.
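Before entering the bind account and trees on the firewall, it can save time to verify them directly against the directory. The following is a minimal, illustrative Python sketch that assumes the third-party ldap3 package; the server name, bind DN, password, and group name are placeholders:

# Minimal sketch (requires the ldap3 package): verify the bind account and search base
# before configuring them on the firewall. All names and credentials are placeholders.
from ldap3 import Server, Connection, ALL

server = Server("dc01.example.local", port=636, use_ssl=True, get_info=ALL)
conn = Connection(server,
                  user="CN=ldap-bind,OU=Service Accounts,DC=example,DC=local",
                  password="changeme",
                  auto_bind=True)  # raises an exception if the bind fails

# Look up the group you intend to import (e.g. "SSLVPN Users") to confirm the search tree.
conn.search("DC=example,DC=local", "(cn=SSLVPN Users)", attributes=["member"])
print(conn.entries)
conn.unbind()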


User Settings

  1. Navigate to the Users | Local Users & Groups page.
  2. On the appropriate Local Users or Local Groups tab, click Configure on the newly imported LDAP User or Group. NOTE: This is dependent on the User or Group you imported in the steps above. If you imported a user, you will configure the imported user; if you imported a group, you will access the Local Groups tab and configure the imported group.
  3. Under VPN Access tab select the appropriate address objects/groups that your LDAP User or LDAP Group will need access to and click the right arrow to Add Network to Access List.
    image
  4. Click OK to save the settings and close the window.
  5. To make your User or Group a member of the SSLVPN Services group for access to SSLVPN, access the Local Groups tab and click Configure on SSLVPN Services.
  6. On the Members tab, add your imported user or group to the Member Users and Groups.
    image
  7. Click OK to save the settings and close the window.

https://www.sonicwall.com/en-us/support/knowledge-base/170503844059585

Author: Angelo A Vitale
Last update: 2018-12-11 01:56


How to Configure a Site to Site VPN Policy using Main Mode

Description

This article details how to configure a Site-to-Site VPN using Main Mode, which requires the SonicWall and the Remote VPN Concentrator to both have Static, Public IP Addresses.

Resolution

Step 1: Creating Address Objects for VPN subnets:

1. Login to the SonicWall Management Interface

2. Click Manage in the top navigation menu

3. Navigate to Objects | Address Objects, scroll down to the bottom of the page, and click the Add button.

On the NSA 2650

Image 

On the NSA 4600

Image

4. Configure the Address Objects as mentioned in the figure above, click Add and click Close when finished.



Step 2: Configuring a VPN policy on Site A SonicWall

1. Click Manage in the top navigation menu and navigate to the VPN | Base Settings page. Click the Add button; the VPN Policy window is displayed.

2. Click the General tab.

  • Select IKE using Preshared Secret from the Authentication Method menu.
  • Enter a name for the policy in the Name field.
  • Enter the WAN IP address of the remote connection in the IPsec Primary Gateway Name or Address field (Enter NSA 240's WAN IP address).

 TIP: If the Remote VPN device supports more than one endpoint, you may optionally enter a second host name or IP address of the remote connection in the IPsec Secondary Gateway Name or Address field.

  • Enter a Shared Secret password to be used to set up the Security Association in the Shared Secret and Confirm Shared Secret fields. The Shared Secret must be at least 4 characters long, and should include both numbers and letters.
  • Optionally, you may specify a Local IKE ID (optional) and Peer IKE ID (optional) for this Policy. By default, the IP Address (ID_IPv4_ADDR) is used for Main Mode negotiations, and the SonicWall Identifier (ID_USER_FQDN) is used for Aggressive Mode.

Image
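If you need a Shared Secret that comfortably meets the length and character requirements above, the following minimal Python sketch generates one; the 32-character length is an arbitrary choice for extra strength, not a SonicWall requirement:

# Minimal sketch: generate a pre-shared secret containing both numbers and letters.
# 32 characters is an arbitrary length choice; the minimum stated above is 4.
import secrets
import string

alphabet = string.ascii_letters + string.digits

while True:
    candidate = "".join(secrets.choice(alphabet) for _ in range(32))
    if any(c.isdigit() for c in candidate) and any(c.isalpha() for c in candidate):
        break

print("Shared Secret:", candidate)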

3. Click the Network Tab.

  • Under Local Networks, select a local network from Choose local network from list: and select the address object X0 Subnet (LAN Primary Subnet)
  • Under Destination Networks, select Choose destination network from list: and select the address object NSA 240 Site (Site B network)

 NOTE: DHCP over VPN is not supported with IKEv2.

Image
4. Click the Proposals Tab.

  • Under IKE (Phase 1) Proposal, select Main Mode from the Exchange menu. Aggressive Mode is generally used when WAN addressing is dynamically assigned. IKEv2 causes all the negotiation to happen via IKE v2 protocols, rather than using IKE Phase 1 and Phase 2. If you use IKE v2, both ends of the VPN tunnel must use IKE v2.
  • Under IKE (Phase 1) Proposal, the default values for DH Group, Encryption, Authentication, and Life Time are acceptable for most VPN configurations. Be sure the Phase 1 values on the opposite side of the tunnel are configured to match. You can also choose AES-128, AES-192, or AES-256 from the Encryption menu instead of 3DES for enhanced security.

 NOTE: The Windows 2000 L2TP client and Windows XP L2TP client can only work with DH Group 2. They are incompatible with DH Groups 1 and 5.

  • Under IPsec (Phase 2) Proposal, the default values for Protocol, Encryption, Authentication, Enable Perfect Forward Secrecy, DH Group, and Lifetime are acceptable for most VPN SA configurations. Be sure the Phase 2 values on the opposite side of the tunnel are configured to match.

Image 

5. Click the Advanced Tab.

  • Select Enable Keep Alive to use heartbeat messages between peers on this VPN tunnel. If one end of the tunnel fails, using Keepalives will allow for the automatic
    renegotiation of the tunnel once both sides become available again without having to wait for the proposed Life Time to expire.
  • Select Enable Windows Networking (NetBIOS) Broadcast to allow access to remote network resources by browsing the Windows® Network Neighborhood.
  • To manage the local SonicWall through the VPN tunnel, select HTTP, HTTPS, or both from Management via this SA. Select HTTP, HTTPS, or both in the User login via this SA to allow users to login using the SA.
  • If you wish to use a router on the LAN for traffic entering this tunnel destined for an unknown subnet, for example, if you configured the other side to Use this VPN Tunnel as default route for all Internet traffic, you should enter the IP address of your router into the Default LAN Gateway (optional) field.
  • Select an interface or zone from the VPN Policy bound to menu. Zone WAN is the preferred selection if you are using WAN Load Balancing and you wish to allow the VPN to use either WAN interface.
  • Click OK to apply the settings.

Image 



Step 3: Configuring a VPN policy on Site B SonicWall 

1. Login to the Site B SonicWall appliance and Click Manage in the top navigation menu. Click  VPN | Base Settings page and Click Add button. The VPN Policy window is displayed.

2. Click the General Tab.

  • Select IKE using Preshared Secret from the Authentication Method menu.
  • Enter a name for the policy in the Name field.
  • Enter the WAN IP address of the remote connection in the IPsec Primary Gateway Name or Address field (Enter NSA 4600's WAN IP address).
  • If the Remote VPN device supports more than one endpoint, you may optionally enter a second host name or IP address of the remote connection in the IPsec Secondary Gateway Name or Address field.

 NOTE: Secondary gateways are not supported with IKEv2.

  • Enter a Shared Secret password to be used to set up the Security Association in the Shared Secret and Confirm Shared Secret fields. The Shared Secret must be at least 4 characters long, and should include both numbers and letters.
  • Optionally, you may specify a Local IKE ID (optional) and Peer IKE ID (optional) for this Policy. By default, the IP Address (ID_IPv4_ADDR) is used for Main Mode negotiations, and the SonicWall Identifier (ID_USER_FQDN) is used for Aggressive Mode.

Image 

3. Click the Network Tab.

  • Under Local Networks, select a local network from Choose local network from list: and select the address object X0 Subnet (LAN Primary Subnet)

 NOTE: DHCP over VPN is not supported with IKEv2.

  • Under Destination Networks, select Choose destination network from list: and select the address object NSA 4600 Site (Site A network)
    Image

4. Click the Proposals Tab.

 NOTE: Settings must be same as Site A.

Image

5. Click the Advanced Tab.
 
  • Select Enable Keep Alive to use heartbeat messages between peers on this VPN tunnel. If one end of the tunnel fails, using Keep alives will allow for the automatic
    renegotiation of the tunnel once both sides become available again without having to wait for the proposed Life Time to expire.
  • Select Enable Windows Networking (NetBIOS) Broadcast to allow access to remote network resources by browsing the Windows® Network Neighborhood.
  • To manage the local SonicWall through the VPN tunnel, select HTTP, HTTPS, or both from Management via this SA. Select HTTP, HTTPS, or both in the User login via this SA to allow users to login using the SA.
  • If you wish to use a router on the LAN for traffic entering this tunnel destined for an unknown subnet, for example, if you configured the other side to Use this VPN Tunnel as default route for all Internet traffic, you should enter the IP address of your router into the Default LAN Gateway (optional) field.
  • Select an interface or zone from the VPN Policy bound to menu. Zone WAN is the preferred selection if you are using WAN Load Balancing and you wish to allow the VPN to use either WAN interface.
  • Click OK to apply the settings. 

Image


https://www.sonicwall.com/en-us/support/knowledge-base/170504380887908

Author: Angelo A Vitale
Last update: 2018-12-18 06:30


How To Put the SonicWall into Safe Mode

Description

 This article describes how to put a SonicWall into Safe Mode through the GUI or through the Command Line Interface (CLI). This article will also detail how to upgrade the Firmware or the ROM Version while in Safe Mode as these are the two most common configurations performed using Safe Mode.

 Cause

 Putting the SonicWall into Safe Mode is most commonly required in the following instances:

  • Upgrading the Firmware without access to the GUI/CLI
  • Upgrading the ROM Version
  • Viewing the Bootlogs or other Diagnostic Information
  • Attempting to gain access to an unresponsive device

 Resolution

Putting the SonicWall into Safe Mode

1. Using a paperclip or similarly sized object, press and hold down the RST Button located in the small hole on the front of the device for at least 60 Seconds. Once the Test Light on the device becomes solid or begins to blink, the SonicWall is in Safe Mode.

 NOTE:  Some SonicWall Models, such as certain TZs, have the RST Button located on the back of the device. Consult the Administration or Hardware Guide for your specific SonicWall Model if you're unsure.

2. Connect a computer directly to the following Interface, depending on what model SonicWall you have, via an Ethernet cable:

Generation 5: X0 Interface

Generation 6: Management Interface

3. Manually assign a Static IP / Subnet Mask on the connected computer's NIC depending on what model SonicWall you have:

Generation 5: 192.168.168.20 | 255.255.255.0

Generation 6: 192.168.1.20 | 255.255.255.0

4. Open a Web Browser and navigate to the following URL, depending on what model SonicWall you have:

Generation 5: 192.168.168.168

Generation 6: 192.168.1.254
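To confirm that your computer can actually reach the Safe Mode GUI before opening a browser, the following minimal Python sketch checks both addresses; it assumes the Safe Mode GUI answers plain HTTP on port 80, which may vary by model:

# Minimal sketch: test TCP connectivity to the Safe Mode GUI addresses listed above.
# Assumes the GUI answers on HTTP port 80; adjust if your model differs.
import socket

CANDIDATES = {
    "Generation 5": "192.168.168.168",
    "Generation 6": "192.168.1.254",
}

for gen, ip in CANDIDATES.items():
    try:
        with socket.create_connection((ip, 80), timeout=3):
            print(f"{gen}: Safe Mode GUI responding at http://{ip}")
    except OSError:
        print(f"{gen}: no response at {ip} (check cabling and the static IP on your NIC)")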

5. You should now be on the Safe Mode GUI and have the following options:

Upload New Firmware and Boot to New Firmware

Boot to Existing Firmware

Run Diagnostics

View Bootlog

View System Information

 Putting the SonicWall into Safe Mode via the CLI

1. If you're unfamiliar with how to access the SonicWall Management using CLI please reference How to login to the appliance using the Command Line Interface (CLI).

2. Once logged into the CLI, input the following commands:

Safemode

yes

3. The SonicWall will reboot and enter Safe Mode.

Image

4. Reference the steps above to login to the Safe Mode GUI, beginning with "Step 2: Connect a computer directly to the following Interface..."

Upgrading the Firmware or ROM Version from Safe Mode

1. Download the desired Firmware version from MySonicWall.com or have the desired ROM Version on hand. ROM Packs are only available via SonicWall Technical Support.

 NOTE: Upgrading the ROM Version only applies to Generation 6 NSA SonicWalls - 2600, 3600, 4600, 5600, and 6600. Unless you have been requested to upgrade the ROM Version by SonicWall Technical Support do not attempt to do so.

2. Select Upload New Firmware and follow the prompt in the pop-up window to upload the Firmware or ROM Version to the SonicWall.

3. You should now see the New Firmware or Uploaded ROM Pack on the Safe Mode GUI. You can boot to the new Firmware or ROM by clicking the Boot icon on the far right.

 NOTE: Booting to a new Firmware or ROM Version will reboot the SonicWall and exit Safe Mode. Make sure you're completely finished with the SonicWall's Safe Mode before selecting Boot.

4. After the reboot, login to the SonicWall Management GUI as you normally would. Navigate to Monitor | Current Status | System Status.

5. On the Status screen you should see the new Firmware Version listed under Firmware Version or the new ROM Version listed under Safemode Version.

Image


https://www.sonicwall.com/en-us/support/knowledge-base/170507123738054

Author: Angelo A Vitale
Last update: 2018-12-18 06:33


How to Configure Bandwidth Management



Description



This article shows the steps needed to configure Bandwidth Management (BWM). SonicOS offers an integrated traffic shaping mechanism through its Interfaces, for both Egress (Outbound) and Ingress (Inbound) traffic. Outbound BWM can be applied to traffic sourced from Trusted and Public Zones (such as LAN and DMZ) destined to Untrusted and Encrypted Zones (such as WAN and VPN). Inbound BWM can be applied to traffic sourced from Untrusted and Encrypted Zones destined to Trusted and Public Zones.

This article also includes a working configuration that applies Bandwidth Management to VoIP traffic from any source to any destination, from LAN to WAN.




Resolution




CAUTION: Once BWM has been enabled on an Interface, and a Link Speed has been defined, traffic traversing that link will be throttled both inbound and outbound to the declared values, even if no other settings are configured relating to BWM.



Step 1: Enabling Bandwidth Management (Either Advanced or Global)

1. Navigate to Firewall Settings | BWM. Select either Advanced or Global, depending on your desired configuration.

2. Click Accept.
image




Step 2: Configure Bandwidth Management in WAN Interface

1. Navigate to Network | Interfaces and on the right side of the screen open the Configure menu for the desired WAN Interface.

2. Go to the Advanced tab and enable both the Ingress and Egress Bandwidth Limitation checkboxes.

3. Input the Ingress and Egress Speeds of your WAN in Kbps. If you're unsure of these values, contact your ISP.

4. Click OK.
image
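ISPs usually quote link speeds in Mbps, while the Ingress/Egress fields in step 3 expect Kbps. The following minimal Python sketch shows the conversion; the 100/20 Mbps figures are placeholders:

# Minimal sketch: convert ISP-quoted link speeds (Mbps) into the Kbps values
# that the interface BWM fields expect. The speeds below are placeholders.
def mbps_to_kbps(mbps: float) -> int:
    return int(mbps * 1000)  # SonicOS interface BWM fields are expressed in Kbps

ingress_mbps, egress_mbps = 100, 20   # hypothetical download/upload speeds
print("Ingress Bandwidth (Kbps):", mbps_to_kbps(ingress_mbps))  # 100000
print("Egress Bandwidth (Kbps):", mbps_to_kbps(egress_mbps))    # 20000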



Step 3: Creating Bandwidth Object

NOTE: If you're using Global BWM then you may skip this step.

1. Navigate to Firewall | Bandwidth Objects and click Add.

2. Add a Name, Guaranteed/Maximum Bandwidth, Traffic Priority, and Violation Action.

3. Click OK.
image

Step 4: Creating or Editing an Access Rule to apply Bandwidth Management

1. Navigate to Firewall | Access Rules and find the Access Rule you'd like to apply BWM to. This will be in a Zone to Zone Format.

2. If a new Access Rule is required, click Add and create the rule by entering the desired Source, Destination, Service, etc. into the fields.

TIP: If you're unfamiliar with setting up Access Rules, please reference How to Enable Port Forwarding and Allow Access to a Server Through the SonicWall.

3. On the relevant Access Rule, select the BWM tab.
image

Step 5: Applying the Bandwidth Object or Global BWM Value



1. Select Enable Egress and/or Ingress Bandwidth Management.

2. From the drop-down menu, select the Global Priority Level for BWM if you're using Global BWM, or select the BWM Object you wish to use if using Advanced BWM.

3. Click OK.
image

image


Resolution for SonicOS 6.5 and Later

SonicOS 6.5 was released September 2017. This release includes significant user interface changes and many new features that are different from the SonicOS 6.2 and earlier firmware. The below resolution is for customers using SonicOS 6.5 and later firmware.

CAUTION: Once BWM has been enabled on an Interface, and a Link Speed has been defined, traffic traversing that link will be throttled both inbound and outbound to the declared values, even if no other settings are configured relating to BWM.

Enabling Bandwidth Management (Either Advanced or Global)

  1. Click Manage in the top navigation menu.
  2. Navigate to Firewall Settings | BWM. Select either Advanced or Global, depending on your desired configuration.
  3. Click Accept to save the settings.

image

Step 2: Configure Bandwidth Management in WAN Interface

1. Navigate to Network | Interfaces and on the right side of the screen open the Configure menu for the desired WAN Interface.
image

2. Go to the Advanced tab and enable both the Ingress and Egress Bandwidth Limitation checkboxes.
image

3. Input the Ingress and Egress Speeds of your WAN in Kbps. If you're unsure of these values, contact your ISP.

4. Click OK to save the settings and close the window.





Creating Bandwidth Object (Only for Advanced BWM)

  1. Click Manage in the top navigation menu.
  2. Navigate to Objects | Bandwidth Objects and click Add.
    image
  3. Add a Name, Guaranteed/Maximum Bandwidth, Traffic Priority, and Violation Action.
    image
  4. Click OK to save the settings and close the window.



Creating or Editing an Access Rule to apply Bandwidth Management

1. Navigate to Rules | Access Rules and find the Access Rule you'd like to apply BWM to. Click Configure on the relevant Access Rule or, if a new Access Rule is required, click Add and create the rule by entering the desired Source, Destination, Service, etc. into the fields.

TIP: If you're unfamiliar with setting up Access Rules, please reference How to Enable Port Forwarding and Allow Access to a Server Through the SonicWall.
image

2. On the Access rule creation/modification screen, select the BWM tab. On the BWM tab, Enable Egress or Ingress Bandwidth Management, depending on which you wish to enforce and select the appropriate Bandwidth Priority (if Global BWM) or Bandwidth Object (if Advanced BWM).

image image


3. Click OK to save the settings and close the window.


https://www.sonicwall.com/en-us/support/knowledge-base/170521130013462

Author: Angelo A Vitale
Last update: 2018-12-11 00:48


How To Configure Bandwidth Management with limits Per IP

Description

SonicOS Enhanced offers an integrated traffic shaping mechanism through its Egress (outbound) and Ingress (inbound) bandwidth management (BWM) interfaces. Outbound BWM can be applied to traffic sourced from Trusted and Public Zones (such as LAN and DMZ) destined to Untrusted and Encrypted Zones (such as WAN and VPN). Inbound BWM can be applied to traffic sourced from Untrusted and Encrypted Zones destined to Trusted and Public Zones.

This scenario-based article describes bandwidth management of traffic from a single IP address or multiple IP addresses using Access Rules, applying BWM per IP. With this configuration, bandwidth can be controlled per IP: you can decide how much guaranteed bandwidth and maximum bandwidth each IP is given. This is helpful in scenarios where a single user might be choking the network by using too much bandwidth and there is no requirement for bandwidth management based on any specific service.

Please note: this article covers the scenario where no bandwidth rule is already configured or required. The only requirement is to make sure that no single IP can use more bandwidth than specified.

Resolution

Step 1:

To view the BWM configuration, navigate to the Firewall Settings | BWM page.

Change the Bandwidth management type from the default None to Advanced.



image





Step 2:

Enabling Bandwidth Management on the WAN Interface | Advanced tab

BWM configurations begin by enabling BWM on the relevant WAN interface, and declaring the interface’s available bandwidth in Kbps (Kilobits per second). This is performed from the Network | Interfaces page by selecting the Configure icon for the WAN interface, and navigating to the Advanced tab:



image



Enable and define the bandwidth on the interface



image



Egress and Ingress BWM can be enabled jointly or separately on WAN interfaces. Different bandwidth values may be entered for outbound and inbound bandwidth to support asymmetric links. Link rates up to 100,000 Kbps (100Mbit) may be declared on Fast Ethernet interfaces, while Gigabit Ethernet interfaces will support link rates up to 1,000,000 Kbps (Gigabit). The speed declared should reflect the actual bandwidth available for the link. Oversubscribing the link (i.e. declaring a value greater than the available bandwidth) is not recommended.

Note: Once BWM has been enabled on an interface, and a link speed has been defined, traffic traversing that link will be throttled—both inbound and outbound—to the declared values, even if no Access Rules are configured with BWM settings.

Once one or both BWM settings are enabled on the WAN interface and the available bandwidth has been declared, a Bandwidth tab will appear on Access Rules. The BWM tab will present either Inbound settings, Outbound settings, or both, depending on what was enabled on the WAN interface:

Step 3: Creating Bandwidth Object

This configuration uses Advanced bandwidth management and requires a bandwidth object to be created. Navigate to Firewall | Bandwidth Objects and add an object.

Note: Medium priority is selected here because it will be used for the entire network and for all IPs, not for a specific service. By default, all traffic handled by the firewall is regarded as Medium priority traffic.

Bandwidth throttling per IP can be defined under the Elemental tab.

With that enabled and a limit defined, say 1200 Kbps, no single IP will be able to utilize more than 1200 Kbps of bandwidth, giving all users a fair share.



image



image





Step 4: Creating Access rule

Navigate to Firewall | Access Rules | LAN to WAN.

Edit the default Any-to-Any allow access rule; a new BWM tab will be visible.



image

With this rule, the SonicWall will only limit the bandwidth usage per IP to 1200 Kbps and will not affect any other service.

How to test:

Log out of the SonicWall and test the speed from any PC on the LAN. Its maximum speed will be limited to around 1.2 Mbps.
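As a sanity check on the speed-test result, the following minimal Python sketch converts the 1200 Kbps cap into the units a speed test (Mbps) or a file download (MB/s) will typically report:

# Minimal sketch: translate the per-IP cap into speed-test and download figures.
LIMIT_KBPS = 1200

mbps = LIMIT_KBPS / 1000                 # speed tests usually report megabits per second
mbytes_per_sec = LIMIT_KBPS / 8 / 1000   # downloads usually report megabytes per second

print(f"Per-IP cap: {mbps:.1f} Mbps (~{mbytes_per_sec:.2f} MB/s)")
# Expected output: Per-IP cap: 1.2 Mbps (~0.15 MB/s)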


Resolution for SonicOS 6.5 and Later

SonicOS 6.5 was released September 2017. This release includes significant user interface changes and many new features that are different from the SonicOS 6.2 and earlier firmware. The below resolution is for customers using SonicOS 6.5 and later firmware.

Step 1:

To view the BWM configuration, navigate to the Manage | Security Configuration | Bandwidth Management | BWM page.

Change the Bandwidth management type from the default None to Advanced.



image



Step 2:

Enabling Bandwidth Management on the WAN Interface | Advanced tab

BWM configurations begin by enabling BWM on the relevant WAN interface, and declaring the interface’s available bandwidth in Kbps (Kilobits per second). This is performed from the Network | Interfaces page by selecting the Configure icon for the WAN interface, and navigating to the Advanced tab:



image

Enable and define the bandwidth on the interface



image

Egress and Ingress BWM can be enabled jointly or separately on WAN interfaces. Different bandwidth values may be entered for outbound and inbound bandwidth to support asymmetric links. Link rates up to 100,000 Kbps (100Mbit) may be declared on Fast Ethernet interfaces, while Gigabit Ethernet interfaces will support link rates up to 1,000,000 Kbps (Gigabit). The speed declared should reflect the actual bandwidth available for the link. Oversubscribing the link (i.e. declaring a value greater than the available bandwidth) is not recommended.

Note: Once BWM has been enabled on an interface, and a link speed has been defined, traffic traversing that link will be throttled—both inbound and outbound—to the declared values, even if no Access Rules are configured with BWM settings.

Once one or both BWM settings are enabled on the WAN interface and the available bandwidth has been declared, a Bandwidth tab will appear on Access Rules. The BWM tab will present either Inbound settings, Outbound settings, or both, depending on what was enabled on the WAN interface:

Step 3: Creating Bandwidth Object

This configuration uses Advanced bandwidth management and requires a bandwidth object to be created. Navigate to Manage | Objects | Bandwidth Objects and add an object.

Note: Medium priority is selected here because it will be used for the entire network and for all IPs, not for a specific service. By default, all traffic handled by the firewall is regarded as Medium priority traffic.

Bandwidth throttling per IP can be defined under the Elemental tab.

With that enabled and a limit defined, say 1200 Kbps, no single IP will be able to utilize more than 1200 Kbps of bandwidth, giving all users a fair share.



image

image

Step 4: Create Access rule

Navigate to Manage | Policies | Rules | Access Rules | LAN to WAN.

Edit the default Any-to-Any allow access rule; a new BWM tab will be visible.



image

With this rule, the SonicWall will only limit the bandwidth usage per IP to 1200 Kbps and will not affect any other service.

How to test:

Log out of the SonicWall and test the speed from any PC on the LAN. Its maximum speed will be limited to around 1.2 Mbps.


https://www.sonicwall.com/en-us/support/knowledge-base/170505283492802

Author: Angelo A Vitale
Last update: 2018-12-11 00:48


How to Configure Port Address Translation (PAT) or Port Redirection


Description

This article describes how to change Incoming or Outgoing Ports for any traffic flow that goes through the SonicWall. This process is also known as PAT'ing or Port Address Translation (PAT).

For this process the device can be any of the following:

  • Web Server
  • FTP Server
  • Email Server
  • Terminal Server
  • DVR (Digital Video Recorder)
  • PBX
  • SIP Server
  • IP Camera
  • Printer
  • Application Server
  • Any custom Server Roles
  • Game Consoles

Resolution

Manually translating Ports from a Host on the Internet to a Server, or vice versa, behind the SonicWall using SonicOS involves the following steps:

  1. Creating the necessary Address Objects
  2. Creating the necessary Service Objects
  3. Creating the appropriate PAT Policies which can include Inbound, Outbound, and Loopback
  4. Creating the necessary Firewall Access Rules

These steps will also allow you to enable Port Address Translation with or without altering the IP Addresses involved. If you'd also like to alter the IPs via Network Address Translation (NAT) please see How to Enable Port Forwarding and Allow Access to a Server Through the SonicWall.



TIP: The Public Server Wizard is a straightforward and simple way to set up Port Address Translation through the SonicWall. The Public Server Wizard will simplify the above steps by prompting you for information and creating the necessary settings automatically.

CAUTION: The SonicWall security appliance is managed by HTTP (Port 80) and HTTPS (Port 443), with HTTPS Management being enabled by default. If you are using one or more of the WAN IP Addresses for HTTP/HTTPS Port Forwarding to a Server then you must change the Management Port to an unused Port, or change the Port when navigating to your Server via NAT or another method.



Scenario Overview

The following walk-through details a request on Port 4000 coming into the SonicWall via the WAN and being forwarded to a Server on the LAN as Port 80 (HTTP). Once the configuration is complete, Internet Users can access the Server via Port 4000. Although the examples below show the LAN Zone and HTTP (Port 80) they can apply to any Zone and any Port that is required. Similarly, the WAN IP Address can be replaced with any Public IP that is routed to the SonicWall, such as a Public Range provided by an ISP.
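Once the configuration below is in place, the forward can be verified from any Internet host. The following minimal, illustrative Python sketch sends an HTTP request to port 4000 on the WAN IP; the address is a placeholder for your own public IP:

# Minimal sketch: confirm that a request to port 4000 on the WAN IP is answered by
# the internal web server on port 80. The IP is a placeholder; a non-2xx response
# raises urllib.error.HTTPError.
from urllib.request import urlopen

WAN_IP = "203.0.113.10"   # hypothetical public IP routed to the SonicWall

with urlopen(f"http://{WAN_IP}:4000/", timeout=10) as response:
    print("HTTP status:", response.status)        # 200 indicates the PAT policy is working
    print("Server header:", response.headers.get("Server"))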

TIP: If your user interface looks different to the screenshots in this article, you may need to upgrade your firmware to the latest firmware version for your appliance. To learn more about upgrading firmware, please see Procedure to Upgrade the SonicWall UTM Appliance Firmware Image with Current Preferences.


Step 1: Creating the necessary Address Objects

  1. Log into the SonicWall GUI.
  2. Click Network | Address Objects.
  3. Click the Add a new Address object button and create two Address Objects for the Server's Public IP and the Server's Private IP.
  4. Click OK to add the Address Object to the SonicWall's Address Object Table.





Step 2: Creating the necessary Service Object

  1. Click Network | Service Objects.
  2. Click the Add a new Service object button and create the necessary Service Objects for the Ports required.
  3. Ensure that you know the correct Protocol for the Service Object (TCP, UDP, etc.). If you're unsure of which Protocol is in use, perform a Packet Capture.
  4. Click OK to add the Service Object to the SonicWall's Service Object Table.





Step 3: Creating the appropriate PAT Policies which can include Inbound, Outbound, and Loopback


A NAT Policy will allow SonicOS to translate incoming Packets destined for a Public IP Address to a Private IP Address, and/or a specific Port to another specific Port. Every Packet contains information about the Source and Destination IP Addresses and Ports and with a NAT Policy SonicOS can examine Packets and rewrite those Addresses and Ports for incoming and outgoing traffic.

  1. Click Network | NAT Policies.
  2. Click the Add a new NAT Policy button and a pop-up window will appear.
  3. Click Add to add the NAT Policy to the SonicWall NAT Policy Table.

Note: When creating a NAT Policy you may select the "Create a reflexive policy" checkbox. This will create an inverse Policy automatically; in the example below, adding a reflexive policy for the NAT Policy on the left will also create the NAT Policy on the right. This option is not available when configuring an existing NAT Policy, only when creating a new Policy.

image
Loopback NAT Policy

A Loopback NAT Policy is required when Users on the Local LAN/WLAN need to access an internal Server via its Public IP/Public DNS Name. This Policy will "loopback" the User's request for access as coming from the Public IP of the WAN and then translate it down to the Private IP of the Server. Without a Loopback NAT Policy, internal Users will be forced to use the Private IP of the Server to access it, which will typically create problems with DNS.

If you wish to access this server from other internal zones using the Public IP address http://1.1.1.1, consider creating a Loopback NAT Policy:

  • Original Source: Firewalled Subnets
  • Translated Source: Example Name Public
  • Original Destination: Example Name Public
  • Translated Destination: Example Name Private
  • Original Service: Example Random Port Object
  • Translated Service: Example HTTP Object
  • Inbound Interface: Any
  • Outbound Interface: Any
  • Comment: Loopback Policy
  • Enable NAT Policy: Checked
  • Create a reflexive policy: Unchecked




Step 4: Creating the necessary Firewall Access Rules

  1. Click Firewall | Access Rules.
  2. Select the View Type as Matrix and select your WAN to Appropriate Zone Access Rule. (This will be the Zone the Private IP of the Server resides on.)
  3. Click the Add a new entry/Add... button and in the pop-up window create the required Access Rule by configuring the fields as shown below.
  4. Click Add when finished.






Resolution for SonicOS 6.5 and Later

SonicOS 6.5 was released September 2017. This release includes significant user interface changes and many new features that are different from the SonicOS 6.2 and earlier firmware. The below resolution is for customers using SonicOS 6.5 and later firmware.



Manually translating Ports from a Host on the Internet to a Server, or vice versa, behind the SonicWall using SonicOS involves the following steps:

  1. Creating the necessary Address Objects
  2. Creating the necessary Service Objects
  3. Creating the appropriate PAT Policies which can include Inbound, Outbound, and Loopback
  4. Creating the necessary Firewall Access Rules

These steps will also allow you to enable Port Address Translation with or without altering the IP Addresses involved. If you'd also like to alter the IPs via Network Address Translation (NAT) please see How to Enable Port Forwarding and Allow Access to a Server Through the SonicWall.



TIP: The Public Server Wizard is a straightforward and simple way to set up Port Address Translation through the SonicWall. The Public Server Wizard will simplify the above steps by prompting you for information and creating the necessary settings automatically.

  • Click Quick Configuration in the top Navigation menu.

CAUTION: The SonicWall security appliance is managed by HTTP (Port 80) and HTTPS (Port 443), with HTTPS Management being enabled by default. If you are using one or more of the WAN IP Addresses for HTTP/HTTPS Port Forwarding to a Server then you must change the Management Port to an unused Port, or change the Port when navigating to your Server via NAT or another method.

image

Scenario Overview

The following walk-through details a request on Port 4000 coming into the SonicWall via the WAN and being forwarded to a Server on the LAN as Port 80 (HTTP). Once the configuration is complete, Internet Users can access the Server via Port 4000. Although the examples below show the LAN Zone and HTTP (Port 80) they can apply to any Zone and any Port that is required. Similarly, the WAN IP Address can be replaced with any Public IP that is routed to the SonicWall, such as a Public Range provided by an ISP.
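
Once the steps below are complete, the translation can be verified with a quick TCP check from a host on the Internet. The following is a minimal Python sketch; the WAN IP shown (203.0.113.10) is only a placeholder for your SonicWall's Public IP.

    import socket

    WAN_IP = "203.0.113.10"   # placeholder: substitute the SonicWall WAN / Public IP
    PUBLIC_PORT = 4000        # the published port from this scenario

    # Run this from OUTSIDE the firewall; connect_ex() returns 0 when the TCP handshake succeeds.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(5)
        result = s.connect_ex((WAN_IP, PUBLIC_PORT))

    print("Port 4000 reachable" if result == 0 else f"Port 4000 not reachable (errno {result})")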

TIP: If your user interface looks different to the screenshots in this article, you may need to upgrade your firmware to the latest firmware version for your appliance. To learn more about upgrading firmware, please see Procedure to Upgrade the SonicWall UTM Appliance Firmware Image with Current Preferences.


Step 1: Creating the necessary Address Objects

  1. Log into the SonicWall GUI.
  2. Click Manage in the top navigation menu
  3. Click Objects | Address Objects.
  4. Click the Add a new Address object button and create two Address Objects for the Server's Public IP and the Server's Private IP.
  5. Click OK to add the Address Object to the SonicWall's Address Object Table.

image



Step 2: Creating the necessary Service Object

  1. Click Manage in the top navigation menu.
  2. Click Objects | Service Objects.
  3. Click the Add a new Service object button and create the necessary Service Objects for the Ports required.
  4. Ensure that you know the correct Protocol for the Service Object (TCP, UDP, etc.). If you're unsure of which Protocol is in use, perform a Packet Capture.
  5. Click OK to add the Service Object to the SonicWall's Service Object Table.

image



Step 3: Creating the appropriate PAT Policies which can include Inbound, Outbound, and Loopback


A NAT Policy will allow SonicOS to translate incoming Packets destined for a Public IP Address to a Private IP Address, and/or a specific Port to another specific Port. Every Packet contains information about the Source and Destination IP Addresses and Ports and with a NAT Policy SonicOS can examine Packets and rewrite those Addresses and Ports for incoming and outgoing traffic.
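
Conceptually (this is only an illustration, not how SonicOS is implemented), an inbound NAT/PAT Policy behaves like a lookup on the Packet's original destination followed by a substitution. The addresses in the sketch below are placeholders.

    # Conceptual sketch of an inbound NAT/PAT rewrite -- illustration only, not SonicOS code.
    # Each policy maps (original destination IP, original port) to (translated IP, translated port).
    POLICY = {
        ("203.0.113.10", 4000): ("192.168.1.20", 80),   # placeholder addresses/ports
    }

    def rewrite_inbound(dst_ip: str, dst_port: int) -> tuple:
        """Return the translated destination for a matching Packet, or leave it unchanged."""
        return POLICY.get((dst_ip, dst_port), (dst_ip, dst_port))

    print(rewrite_inbound("203.0.113.10", 4000))   # -> ('192.168.1.20', 80)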

  1. Click Manage in the top navigation menu.
  2. Click Rules | NAT Policies.
  3. Click the Add a new NAT Policy button and a pop-up window will appear.
  4. Click Add to add the NAT Policy to the SonicWall NAT Policy Table.

Note: When creating a NAT Policy you may select the "Create a reflexive policy" checkbox. This will automatically create an inverse Policy; in the example below, adding a reflexive policy for the NAT Policy on the left will also create the NAT Policy on the right. This option is not available when configuring an existing NAT Policy, only when creating a new Policy.

image
Loopback NAT Policy

A Loopback NAT Policy is required when Users on the Local LAN/WLAN need to access an internal Server via its Public IP/Public DNS Name. This Policy will "loopback" the User's request for access as though it is coming from the Public IP of the WAN and then translate it down to the Private IP of the Server. Without a Loopback NAT Policy, internal Users will be forced to use the Private IP of the Server to access it, which will typically create problems with DNS.

If you wish to access this server from other internal zones using the Public IP address (e.g. http://1.1.1.1), consider creating a Loopback NAT Policy with the following settings:

  • Original Source: Firewalled Subnets
  • Translated Source: Example Name Public
  • Original Destination: Example Name Public
  • Translated Destination: Example Name Private
  • Original Service: Example Random Port Object
  • Translated Service: Example HTTP Object
  • Inbound Interface: Any
  • Outbound Interface: Any
  • Comment: Loopback Policy
  • Enable NAT Policy: Checked
  • Create a reflexive policy: Unchecked

image


Step 4: Creating the necessary Firewall Access Rules

  1. Click Manage in the top navigation menu.
  2. Click Rules | Access Rules.
  3. Select the View Type as Matrix and select your WAN to Appropriate Zone Access Rule. (This will be the Zone the Private IP of the Server resides on.)
  4. Click the Add a new entry/Add... button and in the pop-up window create the required Access Rule by configuring the fields as shown below.
  5. Click Add when finished.

image

https://www.sonicwall.com/en-us/support/knowledge-base/170505515124447

Author: Angelo A Vitale
Last update: 2018-12-11 00:46


How to Configure Static Routes in SonicOS

Description

How to Configure Static Routes in SonicOS Enhanced

Resolution

Video Tutorial: Click here for the video tutorial of this topic

If you have routers on your interfaces and if you want to access the computers attached to the router, you need to configure static routes on the SonicWall security appliance on the Network | Routing page. The static route policies will create static routing entries that make decisions based upon source address, source netmask, destination address, destination netmask, service, interface, gateway and metric.
image

In the above example, a NAT-enabled SonicWall UTM appliance is configured with a LAN IP of 192.168.168.168 / 255.255.255.0 and the computers on the LAN network are in the same IP range. The local router has the IP address 192.168.168.254/24, uses 192.168.168.168 as its gateway, and connects to another network numbered 10.10.20.x.




Configuring Static Routes on SonicOS Enhanced

1. Login to the SonicWall Management Interface
2. Select Network | Routing and click the Add button.
image

3. Select the following Route Policy Settings:

- Source = Any
- Under Destination, select Create New Address Object:

Enter a name for the static route.
Specify the Zone Assignment as LAN.
Specify the Type as Network.
Specify the IP Address 10.10.20.0.
Specify the Netmask 255.255.255.0.
Click OK.
- Service = Any
- Under Gateway, select Create New Address Object:

Enter a name for the local router.
Specify the Zone Assignment as LAN.
Specify the Type as Host.
Specify the IP Address 192.168.168.254 (i.e. the router IP on X0).
Click OK.
- Specify the interface as LAN.
- Specify the metric as 1.
- Click OK.

Notes:

  • The destination network and mask must define a logical subnet which doesn't overlap the LAN subnet. The gateway must be local to the LAN. (A quick way to check both conditions is sketched after these notes.)
  • The router at 192.168.168.254 must have a default route pointing to the firewall's LAN IP address (192.168.168.168) for the secondary subnet to be able to access the internet through the SonicWall's connection.
  • You can also establish static routes for the WAN, DMZ and additional interfaces as applicable, but only if the gateway router involved is a second router, not the main WAN Gateway router, for which you will not need static routes.
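
As a quick way to check the first two notes against the values used in this article, Python's standard ipaddress module can be used. This is only a sketch with the sample addresses from the example above.

    import ipaddress

    lan         = ipaddress.ip_network("192.168.168.0/24")   # the firewall's LAN (X0) subnet
    destination = ipaddress.ip_network("10.10.20.0/24")      # remote network behind the local router
    gateway     = ipaddress.ip_address("192.168.168.254")    # the local router's LAN address

    # The destination must not overlap the LAN subnet, and the gateway must be local to the LAN.
    assert not destination.overlaps(lan), "Destination overlaps the LAN subnet"
    assert gateway in lan, "Gateway is not on the LAN subnet"
    print("Static route values look consistent")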




Resolution for SonicOS 6.5 and Later

SonicOS 6.5 was released September 2017. This release includes significant user interface changes and many new features that are different from the SonicOS 6.2 and earlier firmware. The below resolution is for customers using SonicOS 6.5 and later firmware.

If you have routers on your interfaces and if you want to access the computers attached to the router, you need to configure static routes on the SonicWall security appliance on the Network | Routing page. The static route policies will create static routing entries that make decisions based upon source address, source netmask, destination address, destination netmask, service, interface, gateway and metric.
image

In the above example, a NAT-enabled SonicWall UTM appliance is configured with a LAN IP of 192.168.168.168 / 255.255.255.0 and the computers on the LAN network are in the same IP range. The local router has the IP address 192.168.168.254/24, uses 192.168.168.168 as its gateway, and connects to another network numbered 10.10.20.x.




Configuring Static Routes on SonicOS Enhanced

  1. Login to the SonicWall Management Interface
  2. Click Manage in the top navigation menu
  3. Click Network | Routing | Route Policies and click the Add button.

image

4. Select the following Route Policy Settings:

- Source = Any
- Under Destination, select Create New Address Object:

Enter a name for the static route.
Specify the Zone Assignment as LAN.
Specify the Type as Network.
Specify the IP Address 10.10.20.0.
Specify the Netmask 255.255.255.0.
Click OK.
- Service = Any
- Under Gateway, select Create New Address Object:

Enter a name for the local router.
Specify the Zone Assignment as LAN.
Specify the Type as Host.
Specify the IP Address 192.168.168.254 (i.e. the router IP on X0).
Click OK.
- Specify the interface as LAN.
- Specify the metric as 1.
- Click OK.

Notes:

  • The destination network and mask must define a logical subnet which doesn't overlap the LAN subnet. The gateway must be local to the LAN.
  • The router at 192.168.168.254 must have a default route pointing to the firewall's LAN IP address (192.168.168.168) for the secondary subnet to be able to access the internet through the SonicWall's connection.
  • You can also establish static routes for the WAN, DMZ and additional interfaces as applicable, but only if the gateway router involved is a second router, not the main WAN Gateway router, for which you will not need static routes.

https://www.sonicwall.com/en-us/support/knowledge-base/170505813100854

Author: Angelo A Vitale
Last update: 2018-12-11 00:48


How to Enable Port Forwarding and Allow Access to a Server Through the SonicWall

Description

This article describes how to access an internal device or server behind the SonicWall firewall. This process is also known as opening ports, NAT, and Port Forwarding.

For this process the device that is attempting to be accessed can be any of the following:

  • Web Server
  • FTP Server
  • Email Server
  • Terminal Server
  • DVR (Digital Video Recorder)
  • PBX
  • SIP Server
  • IP Camera
  • Printer
  • Application Server
  • Any custom Server Roles
  • Game Consoles




Cause

By default the SonicWall disallows all Inbound Traffic that isn't part of a communication that began from an internal device, such as something on the LAN Zone. This is to protect internal devices from malicious access; however, it is often necessary to open up certain parts of a network, such as Servers, to the outside world.

To accomplish this the SonicWall needs a Firewall Access Rule to allow the traffic from the public Internet to the internal network as well as a Network Address Translation (NAT) Policy to direct the traffic to the correct device.

Resolution

Manually opening Ports / enabling Port forwarding to allow traffic from the Internet to a Server behind the SonicWall using SonicOS involves the following steps:

  1. Creating the necessary Address Objects
  2. Creating the necessary Service Objects
  3. Creating the appropriate NAT Policies which can include Inbound, Outbound, and Loopback
  4. Creating the necessary Firewall Access Rules

These steps will also allow you to enable Port Address Translation with or without altering the IP Addresses involved.

TIP: The Public Server Wizard is a straightforward and simple way to provide public access to an internal Server through the SonicWall. The Public Server Wizard will simplify the above steps by prompting you for information and creating the necessary Settings automatically.

You can learn more about the Public Server Wizard by reading How to open ports using the SonicWall Public Server Wizard.

CAUTION: The SonicWall security appliance is managed by HTTP (Port 80) and HTTPS (Port 443), with HTTPS Management being enabled by default. If you are using one or more of the WAN IP Addresses for HTTP/HTTPS Port Forwarding to a Server then you must change the Management Port to an unused Port, or change the Port when navigating to your Server via NAT or another method.

Scenario Overview

The following walk-through details allowing HTTPS Traffic from the Internet to a Server on the LAN. Once the configuration is complete, Internet Users can access the Server via the Public IP Address of the SonicWall's WAN. Although the examples below show the LAN Zone and HTTPS (Port 443) they can apply to any Zone and any Port that is required. Similarly, the WAN IP Address can be replaced with any Public IP that is routed to the SonicWall, such as a Public Range provided by an ISP.
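
After completing the steps below, HTTPS reachability through the Public IP can be verified from an external host (or from the LAN, once the Loopback NAT Policy described later is in place). The sketch below only confirms that a TLS handshake completes; the IP is a placeholder.

    import socket
    import ssl

    PUBLIC_IP = "203.0.113.10"   # placeholder: substitute the SonicWall WAN / Public IP

    ctx = ssl.create_default_context()
    ctx.check_hostname = False        # connecting by IP address, not by name
    ctx.verify_mode = ssl.CERT_NONE   # reachability check only; skip certificate validation

    with socket.create_connection((PUBLIC_IP, 443), timeout=5) as raw_sock:
        with ctx.wrap_socket(raw_sock, server_hostname=PUBLIC_IP) as tls_sock:
            print("TLS handshake completed, protocol:", tls_sock.version())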

TIP: If your user interface looks different to the screenshot in this article, you may need to upgrade your firmware to the latest firmware version for your appliance. To learn more about upgrading firmware, please see Procedure to Upgrade the SonicWall UTM Appliance Firmware Image with Current Preferences.

image
Step 1: Creating the necessary Address Objects

  1. Log into the SonicWall GUI.
  2. Click Network | Address Objects.
  3. Click the Add a new Address object button and create two Address Objects for the Server's Public IP and the Server's Private IP.
  4. Click OK to add the Address Object to the SonicWall's Address Object Table.

Step 2: Creating the necessary Service Object

  1. Click Network | Service Objects.
  2. Click the Add a new Service object button and create the necessary Service Objects for the Ports required.
  3. Ensure that you know the correct Protocol for the Service Object (TCP, UDP, etc.). If you're unsure of which Protocol is in use, perform a Packet Capture.
  4. Click OK to add the Service Object to the SonicWall's Service Object Table.

image



Step 3: Creating the appropriate NAT Policies which can include Inbound, Outbound, and Loopback


A NAT Policy will allow SonicOS to translate incoming Packets destined for a Public IP Address to a Private IP Address, and/or a specific Port to another specific Port. Every Packet contains information about the Source and Destination IP Addresses and Ports and with a NAT Policy SonicOS can examine Packets and rewrite those Addresses and Ports for incoming and outgoing traffic.

  1. Click Network | NAT Policies.
  2. Click the Add a new NAT Policy button and a pop-up window will appear.
  3. Click Add to add the NAT Policy to the SonicWall NAT Policy Table.

Note: When creating a NAT Policy you may select the "Create a reflexive policy" checkbox. This will automatically create an inverse Policy; in the example below, adding a reflexive policy for the NAT Policy on the left will also create the NAT Policy on the right. This option is not available when configuring an existing NAT Policy, only when creating a new Policy.

image
Loopback NAT Policy

A Loopback NAT Policy is required when Users on the Local LAN/WLAN need to access an internal Server via its Public IP/Public DNS Name. This Policy will "loopback" the User's request for access as though it is coming from the Public IP of the WAN and then translate it down to the Private IP of the Server. Without a Loopback NAT Policy, internal Users will be forced to use the Private IP of the Server to access it, which will typically create problems with DNS.

If you wish to access this server from other internal zones using the Public IP address (e.g. http://1.1.1.1), consider creating a Loopback NAT Policy with the following settings:

  • Original Source: Firewalled Subnets
  • Translated Source: X1 IP
  • Original Destination: X1 IP
  • Translated Destination: Example Name Private
  • Original Service: HTTPS
  • Translated Service: Original
  • Inbound Interface: Any
  • Outbound Interface: Any
  • Comment: Loopback policy
  • Enable NAT Policy: Checked
  • Create a reflexive policy: Unchecked

image
Step 4: Creating the necessary Firewall Access Rules

  1. Click Firewall | Access Rules.
  2. Select the View Type as Matrix and select your WAN to Appropriate Zone Access Rule. (This will be the Zone the Private IP of the Server resides on.)
  3. Click the Add a new entry/Add... button and in the pop-up window create the required Access Rule by configuring the fields as shown below.
  4. Click Add when finished.

image




Resolution for SonicOS 6.5 and Later

SonicOS 6.5 was released September 2017. This release includes significant user interface changes and many new features that are different from the SonicOS 6.2 and earlier firmware. The below resolution is for customers using SonicOS 6.5 and later firmware.





Manually opening Ports / enabling Port forwarding to allow traffic from the Internet to a Server behind the SonicWall using SonicOS involves the following steps:

  1. Creating the necessary Address Objects
  2. Creating the necessary Service Objects
  3. Creating the appropriate NAT Policies which can include Inbound, Outbound, and Loopback
  4. Creating the necessary Firewall Access Rules

These steps will also allow you to enable Port Address Translation with or without altering the IP Addresses involved.

TIP: The Public Server Wizard is a straightforward and simple way to provide public access to an internal Server through the SonicWall. The Public Server Wizard will simplify the above steps by prompting you for information and creating the necessary Settings automatically.

Click Quick Configuration in the top navigation menu.

You can learn more about the Public Server Wizard by reading How to open ports using the SonicWall Public Server Wizard.

CAUTION: The SonicWall security appliance is managed by HTTP (Port 80) and HTTPS (Port 443), with HTTPS Management being enabled by default. If you are using one or more of the WAN IP Addresses for HTTP/HTTPS Port Forwarding to a Server then you must change the Management Port to an unused Port, or change the Port when navigating to your Server via NAT or another method.

image

Scenario Overview

The following walk-through details allowing HTTPS Traffic from the Internet to a Server on the LAN. Once the configuration is complete, Internet Users can access the Server via the Public IP Address of the SonicWall's WAN. Although the examples below show the LAN Zone and HTTPS (Port 443) they can apply to any Zone and any Port that is required. Similarly, the WAN IP Address can be replaced with any Public IP that is routed to the SonicWall, such as a Public Range provided by an ISP.

TIP: If your user interface looks different to the screenshot in this article, you may need to upgrade your firmware to the latest firmware version for your appliance. To learn more about upgrading firmware, please see Procedure to Upgrade the SonicWall UTM Appliance Firmware Image with Current Preferences.


Step 1: Creating the necessary Address Objects

  1. Log into the SonicWall GUI.
  2. Click Manage in the top navigation menu.
  3. Click Objects | Address Objects.
  4. Click the Add a new Address object button and create two Address Objects for the Server's Public IP and the Server's Private IP.
  5. Click OK to add the Address Object to the SonicWall's Address Object Table.

image



Step 2: Creating the necessary Service Object

  1. Click Manage in the top navigation menu
  2. Click Objects | Service Objects.
  3. Click the Add a new Service object button and create the necessary Service Objects for the Ports required.
  4. Ensure that you know the correct Protocol for the Service Object (TCP, UDP, etc.). If you're unsure of which Protocol is in use, perform a Packet Capture.
  5. Click OK to add the Service Object to the SonicWall's Service Object Table.

image



Step 3: Creating the appropriate NAT Policies which can include Inbound, Outbound, and Loopback


A NAT Policy will allow SonicOS to translate incoming Packets destined for a Public IP Address to a Private IP Address, and/or a specific Port to another specific Port. Every Packet contains information about the Source and Destination IP Addresses and Ports and with a NAT Policy SonicOS can examine Packets and rewrite those Addresses and Ports for incoming and outgoing traffic.

  1. Click Manage in the top navigation menu.
  2. Click Rules | NAT Policies.
  3. Click the Add a new NAT Policy button and a pop-up window will appear.
  4. Click Add to add the NAT Policy to the SonicWall NAT Policy Table.

Note: When creating a NAT Policy you may select the "Create a reflexive policy" checkbox. This will automatically create an inverse Policy; in the example below, adding a reflexive policy for the NAT Policy on the left will also create the NAT Policy on the right. This option is not available when configuring an existing NAT Policy, only when creating a new Policy.

image
Loopback NAT Policy

A Loopback NAT Policy is required when Users on the Local LAN/WLAN need to access an internal Server via its Public IP/Public DNS Name. This Policy will "loopback" the User's request for access as though it is coming from the Public IP of the WAN and then translate it down to the Private IP of the Server. Without a Loopback NAT Policy, internal Users will be forced to use the Private IP of the Server to access it, which will typically create problems with DNS.

If you wish to access this server from other internal zones using the Public IP address (e.g. http://1.1.1.1), consider creating a Loopback NAT Policy with the following settings:

  • Original Source: Firewalled Subnets
  • Translated Source: X1 IP
  • Original Destination: X1 IP
  • Translated Destination: Example Name Private
  • Original Service: HTTPS
  • Translated Service: Original
  • Inbound Interface: Any
  • Outbound Interface: Any
  • Comment: Loopback policy
  • Enable NAT Policy: Checked
  • Create a reflexive policy: Unchecked

image
Step 4: Creating the necessary Firewall Access Rules

  1. Click Manage in the top navigation menu.
  2. Click Rules | Access Rules.
  3. Select the View Type as Matrix and select your WAN to Appropriate Zone Access Rule. (This will be the Zone the Private IP of the Server resides on.)
  4. Click the Add a new entry/Add... button and in the pop-up window create the required Access Rule by configuring the fields as shown below.
  5. Click Add when finished.

image

Author: Angelo A Vitale
Last update: 2018-12-11 00:51


How to override the MAC Address of the WAN Interface on SonicWall (With video tutorial)

Description

How to override the MAC Address of the WAN Interface on SonicWall (With video tutorial)

Resolution

Problem Definition:

Some cable Internet providers use the ethernet hardware (MAC) address of the customer's computer in order to assign an IP address and grant access to the Internet. Follow these steps to clone (proxy) the MAC address of the computer previously connected directly to the provider onto the SonicWall's WAN so it may be recognized by the provider:


Video Tutorial: Click here for the video tutorial of this topic.

Resolution or Workaround:

1. Connect the computer to your cable modem and go online.
2. Disconnect this computer from the modem and plug it into the LAN port of the SonicWall. Make sure no other devices are connected to the SonicWall at this time.
3. Connect the WAN port of the SonicWall to the modem.
4. Log into the SonicWall management interface.
5. Select Network > Settings (SonicOS Standard) or Network > Interfaces (SonicOS Enhanced).
6. Click the configure icon for the WAN interface.
7. Select the Ethernet tab (SonicOS Standard) or Advanced tab (SonicOS Enhanced).
8. Check the option "Proxy management workstation Ethernet address on WAN" for standard firmware and Override Default MAC address for enhanced firmware.

Note: For enhanced firmware, find the MAC address / Physical address of the machine that can access the internet when connected directly to the modem / router and enter that in the text box for Override Default MAC Address.
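
If the computer is one you can run Python on, its hardware address can also be read with the standard library as an alternative to ipconfig /all or ifconfig. This is only a convenience sketch; uuid.getnode() may fall back to a random value when no hardware address can be found.

    import uuid

    node = uuid.getnode()   # 48-bit hardware address of one local interface
    mac = ":".join(f"{(node >> shift) & 0xFF:02X}" for shift in range(40, -1, -8))
    print("MAC address to enter for Override Default MAC Address:", mac)

    # Per the Python docs, a randomly generated fallback has the multicast bit of the first octet set.
    if (node >> 40) & 0x01:
        print("Warning: this looks like a random fallback value, not a real hardware address.")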

image

9. Click OK.
10. Power off the cable modem, SonicWall and your computer.
11. Power on the Cable modem first. Once it's ready, power on the SonicWall and finally your computer.
12. Try to go online.

You should now find that you are able to access the Internet as expected.

Author: Angelo A Vitale
Last update: 2018-12-11 00:53


Integrating LDAP/Active Directory with SonicWall UTM Appliance

Description

This article covers how to integrate LDAP/Active Directory with a SonicWall firewall.

Resolution

1. Go to the Users | Settings page.
In the Authentication method for login drop-down list, select LDAP + Local Users and click Configure.

image



If you are connected to your SonicWall appliance via HTTP rather than HTTPS, you will see a dialog box warning you of the sensitive nature of the information stored in directory services and offering to change your connection to HTTPS. If you have HTTPS management enabled for the interface to which you are connected (recommended), check the “Do not show this message again” box and click Yes.

2. On the Settings tab of the LDAP Configuration window, configure the following fields 

Name or IP address: The FQDN or the IP address of the LDAP server against which you wish to authenticate. If using a name, be certain that it can be resolved by your DNS server.

Port Number: The default LDAP over TLS port number is TCP 636. The default LDAP (unencrypted) port number is TCP 389. If you are using a custom listening port on your LDAP server,
specify it here.

Server timeout (seconds): The amount of time, in seconds, that the SonicWall will wait for a response from the LDAP server before timing out. Allowable ranges are 1 to 99999, with a default of 10 seconds.
Overall operation timeout (minutes): 5 (default)

Anonymous Login – Some LDAP servers allow for the tree to be accessed anonymously. If your server supports this (Active Directory generally does not), then you may select this option.

Login User Name – Specify a user name that has rights to log in to the LDAP directory. The login name will automatically be presented to the LDAP server in full ‘dn’ notation.
This can be any account with LDAP read privileges (essentially any user account) – Domain Administrative privileges are not required. Note that this is the user’s display name, not their login ID.

Login Password – The password for the user account specified above.
Protocol Version – Select either LDAPv3 or LDAPv2. Most modern implementations of LDAP, including Active Directory, employ LDAPv3.

Use TLS (SSL): Use Transport Layer Security (SSL) to log in to the LDAP server.


3. On the Directory tab, configure the following fields:

Primary domain: The user domain used by your LDAP implementation
User tree for login to server: The location in the directory tree of the user account specified on the Settings tab
Click on Auto-configure
Select Append to Existing trees and Click OK

image



This will populate the Trees containing users and Trees containing user groups fields by scanning through the directories in search of all trees that
contain user objects.

4. On the Schema tab, configure the following fields:

LDAP Schema: Microsoft Active Directory



5. On the LDAP Users tab, configure the following fields:

Default LDAP User Group : Trusted Group


How to Test:

On the LDAP Test tab, test a username and password from Active Directory to make sure that the communication is successful.
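
The same bind can also be verified from any workstation, independently of the firewall. The sketch below assumes the third-party ldap3 package (pip install ldap3); the server name and account are placeholders to replace with your own values.

    from ldap3 import Server, Connection, ALL   # assumes: pip install ldap3

    # Placeholder values -- substitute your LDAP server and a test account.
    server = Server("dc1.your-domain.com", port=636, use_ssl=True, get_info=ALL)

    conn = Connection(server,
                      user="YOUR-DOMAIN\\testuser",   # or a full DN / UPN, depending on your schema
                      password="testpassword",
                      auto_bind=True)                 # raises an exception if the bind fails

    print("Bind succeeded:", conn.bound)
    conn.unbind()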

Author: Angelo A Vitale
Last update: 2018-12-11 01:00


Failover & Load Balancing

Primary WAN Ethernet Interface has the same meaning as the previous firmware’s concept of “Primary WAN.” It is the highest ranked WAN interface in the LB group. The Alternate WAN #1 corresponds to “Secondary WAN”; it has a lower rank than the Primary WAN, but a higher rank than the next two alternates. The others, Alternate WAN #2 and Alternate WAN #3, are new, with Alternate WAN #3 being the lowest ranked among the four WAN members of the LB group.
The Failover and Load Balancing settings are described below:

Enable Load Balancing—This option must be enabled for the user to access the LB Groups and LB Statistics section of the Failover & Load Balancing configuration. If disabled, no options for Failover & Load Balancing are available to be configured.
Respond to Probes—When enabled, the appliance can reply to probe request packets that arrive on any of the appliance’s interfaces.
Any TCP-SYN to Port—This option is available when the Respond to Probes option is enabled. When selected, the appliance will only respond to TCP probe request packets having the same destination TCP port number as the configured value.


Load Balancing Members and Groups
LB Members added to a LB Group take on certain “roles.” A member can only work in one of the following roles:

Primary—Only one member can be the Primary per Group. This member always appears first or at the top of the Member List. Note that although a group can be configured with an empty member list, it is impossible to have members without a Primary.
Alternate—More than one member can be an Alternate, however, it is not possible to have a Group of only Alternate members.
Last-Resort—Only one member can be designated as Last-Resort. Last-Resort can only be configured with other group members.

Each member in a group has a rank. Members are displayed in descending order of rank. The rank is determined by the order of interfaces as they appear in the Member List for the group. The order is important in determining the usage preferences of the Interfaces, as well as the level of precedence within the group. Thus, no two interfaces within a group will have the same or equal rank; each Interface will have a distinct rank.
General Tab
To configure the Group Member Rank settings:

1 Click the Configure icon of the Group you wish to configure on the Network > Failover & LB page. The Edit LB Group dialog displays.

2 On the General tab, modify the following settings:
Name—Edit the display name of the Group. The name of the default group cannot be changed.
Type—Choose the type (or method) of LB from the drop-down menu:
Basic Failover—The four WAN interfaces use rank to determine the order of preemption when the Preempt checkbox has been enabled. Only a higher-ranked interface can preempt an Active WAN interface.
Preempt and failback to preferred interfaces when possible—Select to enable rank to determine the order of preemption when Basic Failover is specified. Selected by default.
Round Robin—This option now allows the user to re-order the WAN interfaces for Round Robin selection. The order is as follows: Primary WAN, Alternate WAN #1, Alternate WAN #2, and Alternate WAN #3; the Round Robin will then repeat back to the Primary WAN and continue the order.
Spill-over—The bandwidth threshold applies to the Primary WAN. Once the threshold is exceeded, new traffic flows are allocated to the Alternates in a Round Robin manner. Once the Primary WAN bandwidth goes below the configured threshold, Round Robin stops, and outbound new flows will again be sent out only through the Primary WAN.
* NOTE: Existing flows will remain associated with the Alternates (as they are already cached) until they time out normally.
When bandwidth exceeds n Kbit/s on Primary, new flows will go to the alternate group members in Round Robin manner—Specify the bandwidth for the Primary. If this value is exceeded, new flows are then sent to alternate group members according to the order listed in the Selected column. The default value is 0.
Ratio—A percentage can be set for each WAN in the LB group. To avoid problems associated with configuration errors, ensure that the percentage corresponds correctly to the WAN interface it indicates.
Use Source and Destination IP Address binding—This option is not selected by default.
Group Members: Select here: / Selected:—Add, delete, and order member interfaces. The use of the selected members depends on the Type selected:
Basic Failover: Interface Ordering
Round Robin: Interface Pool
Spill-over: Primary/Alt. Pool
Ratio: Interface Distribution
3 Add members by selecting a displayed interface from the Group Members: column, and then clicking the Add>> button. You can order the entries in the Selected column by selecting an entry and then clicking the up and down arrow buttons. If you selected Ratio, instead of ordering the entries, you can specify the ratio of each group member in the % field. The total should add up to 100%. You can modify the ratio by clicking the Modify Ratio button or have the ratios adjusted automatically by clicking the Auto Adjust button (a sketch of this adjustment follows these steps).

Delete members from the Selected: column by selecting the displayed interface, and then clicking the left arrow button.

* NOTE: The interface at the top of the list is the Primary.
The Interface Rank does not specify the operation that will be performed on the individual member. The operation that will be performed is specified by the Group Type.
4 Optionally, enter this setting:
Final Back-Up—An entry in this setting is an interface of “last resort,” that is, an interface that is used only when all other interfaces in the Selected: group are either unavailable or unusable. To specify a Final Back-Up interface, select an entry in the Group Members list, and then click the double right arrow button. To remove a Final Back-Up interface, click the double left arrow button.
5 Click OK.
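
When the Ratio type is selected, the member percentages must total 100%; the Auto Adjust button takes care of this on the appliance. Purely as an illustration (not SonicOS code), the adjustment can be pictured as a simple normalization like the Python sketch below.

    def auto_adjust(ratios):
        """Scale per-interface ratios so they sum to 100%, rounding to whole percents."""
        total = sum(ratios.values())
        adjusted = {name: round(value * 100 / total) for name, value in ratios.items()}
        # Give any rounding remainder to the first (highest-ranked) member so the sum is exactly 100.
        first = next(iter(adjusted))
        adjusted[first] += 100 - sum(adjusted.values())
        return adjusted

    print(auto_adjust({"X1": 60, "U0": 25, "X2": 25}))   # -> {'X1': 54, 'U0': 23, 'X2': 23}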

Probing Tab
When Logical probing is enabled, test packets can be sent to remote probe targets to verify WAN path availability. A new option has been provided to allow probing through the additional WAN interfaces: Alternate WAN #3 and Alternate WAN #4.

* NOTE: VLANs for alternate WANs do not support QoS or VPN termination.

To configure the probing options for a specific Group:

1 Click the Configure icon of the Group you wish to configure on the Network > Failover & LB page. The Edit LB Group dialog displays.


2 Click the Probing tab.


3 Modify the following settings:
Check Interface every: n sec—The interval of health checks in units of seconds. The default value is 5 seconds.
Deactivate Interface after: n missed intervals—The number of failed health checks after which the interface is set to Failover. The default value is 3 missed intervals.
Reactivate Interface after: n successful intervals—The number of successful health checks after which the interface is set to Available. The default value is 3 successful intervals (the resulting probe logic is sketched after these settings).
Probe responder.global.sonicwall.com on all interfaces in this group—Enable this checkbox to automatically set Logical/Probe Monitoring on all interfaces in the Group. When enabled, this sends TCP probe packets to the global SNWL host that responds to SNWL TCP packets, responder.global.sonicwall.com, using a target probe destination address of 204.212.170.23:50000. When this checkbox is selected, the rest of the probe configuration will enable built-in settings automatically. The same probe will be applied to all four WAN Ethernet interfaces.
* NOTE: The Dialup WAN probe setting also defaults to the built-in settings.
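
For reference, the probe behaviour described above (a periodic TCP check, deactivation after a number of missed intervals and reactivation after a number of successful ones) can be mimicked from any host with the short Python sketch below. It is only an illustration of the logic, not SonicOS code; it uses the built-in responder target mentioned above.

    import socket
    import time

    TARGET     = ("responder.global.sonicwall.com", 50000)  # built-in probe target (204.212.170.23:50000)
    INTERVAL   = 5   # "Check Interface every" (seconds)
    DOWN_AFTER = 3   # "Deactivate Interface after" (missed intervals)
    UP_AFTER   = 3   # "Reactivate Interface after" (successful intervals)

    state, misses, successes = "Available", 0, 0
    while True:
        try:
            with socket.create_connection(TARGET, timeout=INTERVAL):
                successes, misses = successes + 1, 0
        except OSError:
            misses, successes = misses + 1, 0

        if state == "Available" and misses >= DOWN_AFTER:
            state = "Failover"
            print("Interface would be marked Failover")
        elif state == "Failover" and successes >= UP_AFTER:
            state = "Available"
            print("Interface would be marked Available")

        time.sleep(INTERVAL)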

http://help.sonicwall.com/help/sw/eng/9410/26/2/3/content/Network_WAN_Failover.032.2.htm

Author: Angelo A Vitale
Last update: 2018-12-11 01:01


SonicWall – Opening a non-standard port

See Attached File

Author: Angelo A Vitale
Last update: 2018-12-11 01:36


Configuring the DHCP Server on the SonicWall

Description

 

The SonicWall security appliance includes a DHCP (Dynamic Host Configuration Protocol) server to distribute IP addresses, subnet masks, gateway addresses, and DNS server addresses to your network clients. The Network > DHCP Server page includes settings for configuring the SonicWall security appliance’s DHCP server. You can use the SonicWall security appliance’s DHCP server or use existing DHCP servers on your network. If your network uses its own DHCP servers, make sure the Enable DHCP Server checkbox is unchecked.


The number of address ranges and IP addresses the SonicWall DHCP server can assign depends on the model, operating system, and licenses of the SonicWall security appliance. The table below shows maximum allowed DHCP leases for SonicWall security appliances.

Gen5 Model              DHCP Leases    Gen6 Model    DHCP Leases
SM 10200/10400/10800    16384          SM 9600       16384
NSA E8500               16384          SM 9400       16384
NSA E7500               16384          SM 9200       16384
NSA E6500               16384          NSA 6600      16384
NSA E5500               8192           NSA 5600      8192
NSA E5000               8192           NSA 4600      8192
NSA 4500                8192           NSA 3600      4096
NSA 3500                4096           NSA 2600      4096
NSA 2400                4096           TZ 600        4096
NSA 240                 4096           TZ 500(W)     4096
TZ 210(W)               4096           TZ 400(W)     4096
TZ 200(W)               4096           TZ 300(W)     4096
TZ 100(W)               1024           SOHO (W)      4096

 

 

 

Resolution

Procedure:

Complete the following steps to configure the SonicWall DHCP server for the LAN, DMZ or other network zone on a SonicWall firewall (UTM) appliance running SonicOS Enhanced or Standard firmware.

Enable DHCP Server

1. Click on Manage on the top bar.
2. Navigate to NETWORK | DHCP SERVER.
3. Check Enable DHCP Server. Alert: Make sure there are no other DHCP servers on the LAN before you enable the SonicWall's DHCP server.
4. Optionally, check Enable Conflict Detection. This check box is on by default.
5. Enable DHCP Server Persistence to provide clients with a predictable IP address that does not conflict with another use on the network, even after a client reboot.

 

Image

 

Dynamic DHCP Address Assignments

  1. Click the Add Dynamic button to add a new dynamic entry.
  2. Check Enable this DHCP Scope checkbox. This check box is on by default.
  3. Select the interface to which this DHCP scope applies.
  4. Enter the beginning IP address of the desired IP address range in the Range Start field. Enter the ending IP address in the Range End field. Enter the maximum length of the DHCP lease in the Lease Time field. The Lease Time determines how often the DHCP Server renews IP leases. The default Lease Time is 1440 minutes (24 hours). The time length of the lease can range from 1 to 9999 minutes. (A quick check of the resulting scope size is sketched after these steps.)
  5. Select the gateway IP address that will be assigned to DHCP clients using the Gateway Preferences and Default Gateway fields. The available choices in the Gateway Preferences list depend on the IP addresses assigned to the SonicWall's interfaces. Select the "Other" gateway preference to manually enter an address into the Default Gateway field. If configuring the DHCP server for the LAN, select the gateway address used by LAN computers to access the Internet in the Gateway Preferences list. If a default gateway address has been manually entered, supply the correct value in the Subnet Mask field.
  6. Select the Allow BootP clients to use range check box if you want BootP clients to receive DHCP leases.

Image
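
Before saving a scope, it can help to confirm that the Range Start/End covers the number of clients you expect and stays within the lease limit for your model (see the table earlier in this article). A minimal Python sketch with placeholder values:

    import ipaddress

    range_start = ipaddress.ip_address("192.168.168.20")   # placeholder Range Start
    range_end   = ipaddress.ip_address("192.168.168.250")  # placeholder Range End
    max_leases  = 4096                                     # e.g. a TZ-series limit from the table above

    scope_size = int(range_end) - int(range_start) + 1
    print(f"This scope provides {scope_size} dynamic addresses")
    assert scope_size <= max_leases, "Scope exceeds the model's maximum DHCP leases"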

 

  1. Click the DNS/WINS tab.
  2. Enter the domain name registered for your network in the Domain Name field. An example of a domain name is "your-domain.com". If you do not have a domain name, leave this field blank.
  3. Specify the DNS settings to be assigned to DHCP clients. Select the Inherit DNS Settings Dynamically from the SonicWall's DNS settings radio button to use the DNS servers that you specified on the Network | Settings page in SonicOS Standard firmware or the Network | Interfaces and Network | DNS pages in SonicOS Enhanced. Select the Specify Manually radio button to enter your own DNS servers into the DNS Server 1, DNS Server 2 and DNS Server 3 fields.
  4. Specify the WINS servers, if any, to be assigned to DHCP clients. Enter the IP addresses of the WINS servers on your network in the WINS Server 1 and WINS Server 2 fields. If you do not have a WINS server, leave these fields blank.
  5. Click OK.
Image

Continue this process until you have added or modified all the desired dynamic DHCP address ranges.

Alert: The DHCP Server does not assign an IP address from the dynamic range if the address is already being used by a computer on your LAN.

 

Static DHCP Address Assignments

  1. Select Network | DHCP Server.
  2. Check Enable DHCP Server. Alert: Make sure there are no other DHCP servers on the LAN before you enable the SonicWall's DHCP server.
  3. Optionally, check Enable Conflict Detection. This check box is on by default.
  4. Click the Edit icon next to an existing DHCP server lease scope or click the Add a new static entry button.
  5. Check Enable this DHCP Scope. This check box is on by default.
  6. Select the interface to which this static DHCP address assignment applies.
  7. Enter a label for the static DHCP assignment in the Entry Name field.
  8. Static IP addresses should be assigned to servers that require permanent IP settings. Enter the IP address assigned to your workstation or server in the Static IP Address field.
  9. Select New MAC Address from the list in the Ethernet Address field and enter the Ethernet (MAC) address of your workstation or server.
  10. Enter the maximum length of the DHCP lease in the Lease Time field. The Lease Time determines how often the DHCP Server renews IP leases. The default Lease Time is 1440 minutes (24 hours). The time length of the lease can range from 1 to 9999 minutes.
  11. Select the gateway IP address that will be assigned to DHCP clients using the Gateway Preferences and Default Gateway fields. The available choices in the Gateway Preferences list depend on the IP addresses assigned to the SonicWall's interfaces. Select the "Other" gateway preference to manually enter an address into the Default Gateway field. If configuring the DHCP server for the LAN, select the gateway address used by LAN computers to access the Internet in the Gateway Preferences list. If a default gateway address has been manually entered, supply the correct value in the Subnet Mask field.

 

Image

 

  1. Click the DNS/WINS tab.
  2. Enter the domain name registered for your network in the Domain Name field. An example of a domain name is "your-domain.com". If you do not have a domain name, leave this field blank.
  3. Specify the DNS settings to be assigned to DHCP clients. Select the Inherit DNS Settings Dynamically from the SonicWall's DNS settings radio button to use the DNS servers that you specified on the Network | Settings page in SonicOS Standard firmware or the Network | Interfaces and Network | DNS pages in SonicOS Enhanced. Select the Specify Manually radio button to enter your own DNS servers into the DNS Server 1, DNS Server 2 and DNS Server 3 fields.
  4. Specify the WINS servers, if any, to be assigned to DHCP clients. Enter the IP addresses of the WINS servers on your network in the WINS Server 1 and WINS Server 2 fields. If you do not have a WINS server, leave these fields blank.
  5. Click OK.

Continue this process until you have added all the desired static entries.

NOTES:

  • Make sure there are no other DHCP servers on the LAN before you enable the DHCP server.
  • The SonicWall DHCP Server does not assign an IP address from the dynamic range if the address is already being used by a computer on your LAN.
  • In Firmware 6.X, the SonicWall DHCP server can assign a total of 254 dynamic and static IP addresses. SonicOS Standard and Enhanced can assign more addresses depending on the hardware model in use. For example, a Pro 4060 running SonicOS Enhanced can assign up to 4,096 addresses.

https://www.sonicwall.com/en-us/support/knowledge-base/170505423294713

Author: Angelo A Vitale
Last update: 2018-12-17 19:09


How to create a static DHCP entry in the SonicWall UTM appliance

Description

 If you want to use the SonicWall security appliance’s DHCP server, select Enable DHCP Server on the Network | DHCP Server page. Select Enable Conflict Detection to turn on automatic DHCP scope conflict detection on each zone.

Static entries are IP addresses assigned to servers requiring permanent IP settings. Because SonicOS Enhanced allows multiple DHCP scopes per interface, there is no requirement that the subnet range is attached to the interface when configuring DHCP scopes.

 

Resolution

To configure static entries, follow these steps:

SonicOS Standard and Enhanced:

  1. Log in to the SonicWall and click on MANAGE.
  2. Navigate to the Network | DHCP Server page.
  3. Make sure "Enable DHCP Server" is checked.
  4. Click on the Add Static button to bring up the Static DHCP Configuration window.
  5. Check "Enable this DHCP Scope/Entry".
  6. As prompted, enter a name for this mapping, the computer's assigned IP address, the corresponding MAC address, etc.
  7. Click OK.

 

Image

https://www.sonicwall.com/en-us/support/knowledge-base/170504925446054

Author: Angelo A Vitale
Last update: 2018-12-17 19:13


How to configure 3G/4G dialup modems for WAN Failover

Description

 This article provides information on how to configure 4G/LTE dialup modems for WAN Failover.

SonicOS supports WAN connections using 4G/LTE Wireless modems over Cellular data networks.

  •   Support WAN failover for when the primary WAN has failed.
  •   Support mobile networks where primary wire-based WAN connection is not available, such as in a trade show or kiosks or in a vehicle.
  •   Supported 4G/LTE modem list - Wireless Broadband Cards and Devices (Wireless WAN USB devices) supported
  •   Support for USB dialup modem for locations with no cellular data service or where cellular data is expensive.

Gen6 firewalls that support 4G/LTE:

  • SOHO
  • TZ300-TZ600
  • NSA 2650
  • NSA 3600-NSA 6600

Image

 

Resolution

  1. Choose a USB cellular modem and plug the modem into the SonicWall USB port.
  2. The SonicWall's approved list is Wireless Broadband Cards and Devices (Wireless WAN USB devices) supported. For example, in the US an approved USB 3G modem is the Sprint U760.

  3. Go to the 3G/4G modem Setting page; for the 3G/4G Device Type select 3G/4G/Mobile and click the Accept button.

 

 

Image

Go to the 3G/4G | Connection Profiles page.

      • Click on the Add button to add a connection profile.

      • Create profiles for various devices that may be used, such as USB 3G/4G cellular modems or USB analog modems.

        Image
  1. Go to 3G/4G/Modem | Status page.
    • When there is no modem installed, the status is No device was detected.
    • When a supported 3G/4G modem is detected, it will show the modem’s information

Dialup Modem Selection

In some environments there are no cellular data networks, or the cellular data network is not reachable; to handle this scenario, SonicOS supports a USB analog modem. The configuration to set up a secondary WAN connection using an analog phone line is similar to a 3G/4G connection.

  • Go to the 3G/4G modem Setting page:
    • For the 3G/4G Device Type select Analog Modem and click the Accept button.
    • The Modem Settings will show up.
    • Under Modem Settings, select the country in Initialize Modem Connection For Use in: or enter the modem commands in Initialize Modem Connection Using AT commands. Select the Accept button to initialize the modem.

Now you will configure the Interface Settings:

  1. Check the USB interface in Network | Interfaces.

     NOTE: On some SonicWall appliances there are two USB ports: a modem connected on USB port 1 will show up as interface U0 (Top) and a modem connected on USB port 2 will show up as interface U1 (Bottom). If there is only one USB port, the modem will show up as U0 only.

    Image 

 NOTE: The information marked as "Auto Populate" is usually auto-populated. However, this depends on the profile data from your provider. The information above is only an example.

 
56K profile example

In the 56K modem profile, under General Setting, enter the service provider dialup access account numbers (primary and secondary). The User Name and its password are those of a user on the SonicWall appliance.

Image

 Configure Primary WAN to Secondary WAN Failover

  1. Go to the Network | Failover & LB page
  2. Load Balancing and Respond to Probes are on by default. A default group called Default LB Group has already been created. Click the Configure button to edit it.
    • In Type select Basic Failover: it will fail over to the backup when the primary is down.
    • Accept the default for Preempt and fail back to preferred interface when possible: this selection allows the WAN to fail back to the primary WAN when possible.
    • Select the default WAN interface and add it under Interface Ordering; for SonicWall the X1 interface is the default WAN. Select the 3G/4G modem (U0 or U1 interface) and add it under Final Back-Up: this tells SonicOS that the default WAN interface X1 is the primary WAN and, if the primary WAN is down, to use the Secondary backup WAN interface U0/U1. The Preempt and fallback check box tells SonicOS to fall back to the primary WAN when possible.
  3. Demonstrate Primary WAN to Secondary WAN (U0/U1) failover.

In this example, both the Primary WAN and Secondary WAN are connected. Since the Primary WAN has priority, it is chosen over the Secondary WAN. In this example, Primary WAN X1 has IP 10.50.20.108 and the Secondary WAN has IP of 0.0.0.0

Image

  • Look at a trace route to www.yahoo.com. The SonicWall is using the Primary WAN X1 because the Secondary WAN has a lower priority and is down.
  • Now bring down the Primary WAN; it takes a moment for the Secondary WAN to enable. When the Secondary WAN is up, a trace route to www.yahoo.com goes out using the Secondary WAN interface U0.
  • Look at a trace route to www.yahoo.com again. Now the SonicWall is using the Secondary WAN to go through Sprint Cellular.

https://www.sonicwall.com/en-us/support/knowledge-base/170505961420387

Author: Angelo A Vitale
Last update: 2018-12-17 19:14


How to configure the SonicWall WAN / X1 Interface with Static IP address

Description

 Configuring the SonicWall WAN interface (X1 by default) with a Static IP address provided by the ISP. (Other WAN configurations: DHCP, PPPoE, PPTP or L2TP.)

Example:

In this article we are using the following IP addresses provided by the ISP:

WAN IP: 204.180.153.105
Subnet Mask: 255.255.255.0
Default Gateway: 204.180.153.1
DNS Server 1: 4.2.2.1
DNS Server 2: 4.2.2.2

 

Resolution

Static Mode: This mode is used if the ISP has assigned a static IP address. To configure this mode, follow these steps:


1. Login to the SonicWall Management Interface. (If you are configuring the SonicWall for the first time, the default LAN IP is http://192.168.168.168)

2. Once you are logged into the SonicWall, click the MANAGE option on the top bar and then navigate to NETWORK | Interfaces.

Image

3. Click Configure for the WAN interface (X1 by default). The Edit Interface window is displayed.

 

Image

4. Under IP address, choose Static from the drop-down menu. Enter the static IP address and Subnet Mask given by the ISP.

 

Image

 

5. Enter the Default Gateway given by the ISP. 
 6. Under the DNS server settings, enter the DNS server IP address given by the ISP.
 7. If you want to enable remote management of the SonicWall security appliance from this interface, select the supported management protocol(s): 
     HTTP, HTTPS, SSH, Ping, and/or SNMP.
 8. If you want to allow selected users with limited management rights to log in to the security appliance, select HTTP and/or HTTPS in User Login.
 9. Click OK and check to see if the settings have been updated.

How to test the connectivity:

1. On the SonicWall, click the INVESTIGATE option on the top bar and then navigate to TOOLS | SYSTEM DIAGNOSTICS.
2. Ping your ISP’s Default Gateway or any IP that is pingable on the Internet (e.g. 4.2.2.2).
3. Also try to ping a website (e.g. www.google.com) to ensure that DNS resolution is working. (These checks can also be scripted; see the sketch below.)
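
A minimal Python sketch of these checks is shown below; it uses a TCP connection to the sample DNS server rather than an ICMP ping (which requires raw sockets), so it is an approximation of the same tests.

    import socket

    # 1) Reachability of a public IP: TCP to the sample DNS server 4.2.2.2 on port 53.
    try:
        socket.create_connection(("4.2.2.2", 53), timeout=5).close()
        print("Internet reachability: OK")
    except OSError as exc:
        print("Internet reachability: FAILED -", exc)

    # 2) DNS resolution of a well-known name.
    try:
        print("DNS resolution: OK ->", socket.gethostbyname("www.google.com"))
    except socket.gaierror as exc:
        print("DNS resolution: FAILED -", exc)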

 

Image

 https://www.sonicwall.com/en-us/support/knowledge-base/170503917481882

Author: Angelo A Vitale
Last update: 2018-12-17 19:15



Configuring the DMZ / OPT Interface in NAT Mode

Description

 

You can configure the OPT interface in either Transparent Mode or NAT Mode:

  • NAT Mode translates the private IP addresses of devices connected to the OPT interface to a single, static IP address. By default, the OPT interface is configured in NAT Mode. When configuring the DMZ in NAT mode you must use a different subnet than the one specified for the LAN. (e.g LAN = 192.168.168.0, then DMZ = 10.1.1.1)


  • Transparent Mode enables the SonicWall security appliance to bridge the OPT subnet onto the WAN interface. It requires valid IP addresses for all computers connected to the OPT interface on your network, but allows remote access to authenticated users. You can use the OPT interface in Transparent mode for public servers and devices with static IP addresses you want visible outside your SonicWall security appliance-protected network.

 

Resolution

Here's how to Configure DMZ in NAT Mode:

  1. Click on Network | Interfaces.
  2. Click the Notepad icon in the Configure column for the Unassigned Interface you want to configure. The Edit Interface window is displayed.
    Image
  3. Select the DMZ in the dropdown next to Zone.
  4. Choose Static in the IP Assignment dropdown menu.
  5. Type the Private IP address, which is in a different subnet than that of the LAN. The DMZ IP address should be the gateway for the computers connected to the DMZ.
  6. Enter any optional comment text in the Comment field. This text is displayed in the Comment column of the Interface table.
  7. If you want to enable remote management of the SonicWall from this interface, select the supported management protocol(s): HTTP or HTTPS (either or both). Ping, SNMP and/or SSH are optional protocols that can also be enabled.
  8. If you want to allow selected users with limited management rights to log in to the security appliance, select HTTP and/or HTTPS in User Login.
  9. Click OK to save changes.

Resolution for SonicOS 6.5 and Later

SonicOS 6.5 was released September 2017. This release includes significant user interface changes and many new features that are different from the SonicOS 6.2 and earlier firmware. The below resolution is for customers using SonicOS 6.5 and later firmware.

Here's how to Configure DMZ in NAT Mode:

  1. Click on Manage > System Setup > Network | Interfaces.
  2. Click the Notepad icon in the Configure column for the Unassigned Interface you want to configure. The Edit Interface window is displayed.

Image

  3. Select the DMZ in the dropdown next to Zone.
  4. Choose Static in the IP Assignment dropdown menu.
  5. Type the Private IP address, which is in a different subnet than that of the LAN. The DMZ IP address should be the gateway for the computers connected to the DMZ.
  6. Enter any optional comment text in the Comment field. This text is displayed in the Comment column of the Interface table.
  7. If you want to enable remote management of the SonicWall from this interface, select the supported management protocol(s): HTTP or HTTPS (either or both). Ping, SNMP and/or SSH are optional protocols that can also be enabled.
  8. If you want to allow selected users with limited management rights to log in to the security appliance, select HTTP and/or HTTPS in User Login.
  9. Click OK to save changes.


https://www.sonicwall.com/en-us/support/knowledge-base/170504748150102

Author: Angelo A Vitale
Last update: 2018-12-17 22:24


How to Enable Port Forwarding and Allow Access to a Server Through the SonicWall

This article describes how to access an Internal device or server behind the SonicWall firewall. This process is also known as opening ports, NAT, and Port Forwarding.

For this process the device that is attempting to be accessed can be any of the following:

  • Web Server
  • FTP Server
  • Email Server
  • Terminal Server
  • DVR (Digital Video Recorder)
  • PBX
  • SIP Server
  • IP Camera
  • Printer
  • Application Server
  • Any custom Server Roles
  • Game Consoles

 

Don't want to read? Watch instead!

Cause

 By default the SonicWall disallows all Inbound Traffic that isn't part of a communication that began from an internal device, such as something on the LAN Zone. This is to protect internal devices from malicious access; however, it is often necessary to open up certain parts of a network, such as Servers, to the outside world.

To accomplish this the SonicWall needs a Firewall Access Rule to allow the traffic from the public Internet to the internal network as well as a Network Address Translation (NAT) Policy to direct the traffic to the correct device.

 Resolution

Manually opening Ports / enabling Port forwarding to allow traffic from the Internet to a Server behind the SonicWall using SonicOS involves the following steps:

  1. Creating the necessary Address Objects
  2. Creating the appropriate NAT Policies which can include Inbound, Outbound, and Loopback
  3. Creating the necessary Firewall Access Rules

These steps will also allow you to enable Port Address Translation with or without altering the IP Addresses involved.

 TIP: The Public Server Wizard is a straightforward and simple way to provide public access to an internal Server through the SonicWall. The Public Server Wizard simplifies the steps above by prompting you for information and creating the necessary Settings automatically.

Click Quick Configuration in the top navigation menu.

You can learn more about the Public Server Wizard by reading How to open ports using the SonicWall Public Server Wizard.

 CAUTION: The SonicWall security appliance is managed by HTTP (Port 80) and HTTPS (Port 443), with HTTPS Management being enabled by default. If you are using one or more of the WAN IP Addresses for HTTP/HTTPS Port Forwarding to a Server then you must change the Management Port to an unused Port, or change the Port when navigating to your Server via NAT or another method.

Image

Scenario Overview

The following walk-through details allowing HTTPS Traffic from the Internet to a Server on the LAN. Once the configuration is complete, Internet Users can access the Server via the Public IP Address of the SonicWall's WAN. Although the examples below show the LAN Zone and HTTPS (Port 443) they can apply to any Zone and any Port that is required. Similarly, the WAN IP Address can be replaced with any Public IP that is routed to the SonicWall, such as a Public Range provided by an ISP.

 TIP: If your user interface looks different from the screenshot in this article, you may need to upgrade your firmware to the latest firmware version for your appliance. To learn more about upgrading firmware, please see Procedure to Upgrade the SonicWall UTM Appliance Firmware Image with Current Preferences.


Step 1: Creating the necessary Address Objects

  1. Log into the SonicWall GUI.
  2. Click Manage in the top navigation menu.
  3. Click Objects | Address Objects.
  4. Click the Add a new Address object button and create two Address Objects for the Server's Public IP and the Server's Private IP.
  5. Click OK to add the Address Object to the SonicWall's Address Object Table.

Image

Step 2: Creating the necessary Service Object

  1. Click Manage in the top navigation menu
  2. Click Objects | Service Objects.
  3. Click the Add a new Service object button and create the necessary Service Objects for the Ports required.
  4. Ensure that you know the correct Protocol for the Service Object (TCP, UDP, etc.). If you're unsure of which Protocol is in use, perform a Packet Capture.
  5. Click OK to add the Service Object to the SonicWall's Service Object Table.

Image

Step 3: Creating the appropriate NAT Policies which can include Inbound, Outbound, and Loopback


A NAT Policy allows SonicOS to translate incoming Packets destined for a Public IP Address to a Private IP Address, and/or a specific Port to another specific Port. Every Packet contains information about the Source and Destination IP Addresses and Ports; with a NAT Policy, SonicOS can examine Packets and rewrite those Addresses and Ports for incoming and outgoing traffic.

  1. Click Manage in the top navigation menu.
  2. Click Rules | NAT Policies.
  3. Click the Add a new NAT Policy button and a pop-up window will appear.
  4. Click Add to add the NAT Policy to the SonicWall NAT Policy Table.

Note: When creating a NAT Policy you may select the "Create a reflexive policy" checkbox. This will create an inverse Policy automatically; in the example below, adding a reflexive policy for the NAT Policy on the left will also create the NAT Policy on the right. This option is only available when creating a new Policy, not when configuring an existing one.

Image
Loopback NAT Policy

A Loopback NAT Policy is required when Users on the local LAN/WLAN need to access an internal Server via its Public IP or Public DNS Name. This Policy will "loop back" the User's request as though it were coming from the Public IP of the WAN and then translate it down to the Private IP of the Server. Without a Loopback NAT Policy, internal Users would be forced to use the Private IP of the Server to access it, which typically creates problems with DNS.

If you wish to access this server from other internal zones using the Public IP address Http://1.1.1.1 consider creating a Loopback NAT Policy:

  • Original Source: Firewalled Subnets
  • Translated Source: X1 IP
  • Original Destination: X1 IP
  • Translated Destination: Example Name Private
  • Original Service: HTTPS
  • Translated Service: Original
  • Inbound Interface: Any
  • Outbound Interface: Any
  • Comment: Loopback policy
  • Enable NAT Policy: Checked
  • Create a reflexive policy: Unchecked

Image
Step 4: Creating the necessary Firewall Access Rules

  1. Click Manage in the top navigation menu.
  2. Click Rules | Access Rules.
  3. Select the View Type as Matrix and select your WAN to Appropriate Zone Access Rule. (This will be the Zone the Private IP of the Server resides on.)
  4. Click the Add a new entry/Add... button and in the pop-up window create the required Access Rule by configuring the fields as shown below.
  5. Click Add when finished.

Image
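
After the Address Objects, NAT Policy, and Access Rule are in place, you can confirm from a host outside your network that the forwarded Port actually answers. The sketch below (Node.js, with a placeholder WAN IP and the HTTPS Port from this example) simply attempts a TCP connection; it is a convenience check, not a SonicWall feature.

// Minimal sketch: test whether a forwarded TCP port answers from the outside.
import * as net from "node:net";

function checkPort(host: string, port: number, timeoutMs = 5000): Promise<boolean> {
  return new Promise((resolve) => {
    const socket = net.connect({ host, port });
    socket.setTimeout(timeoutMs);
    socket.once("connect", () => { socket.destroy(); resolve(true); });
    socket.once("timeout", () => { socket.destroy(); resolve(false); });
    socket.once("error", () => resolve(false));
  });
}

// Replace the placeholder address with your own WAN IP.
checkPort("203.0.113.10", 443).then((open) =>
  console.log(open ? "Port 443 is reachable - NAT Policy and Access Rule are working"
                   : "Port 443 did not answer - re-check the NAT Policy and Access Rule"));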


https://www.sonicwall.com/en-us/support/knowledge-base/170503477349850

Author: Angelo A Vitale
Last update: 2018-12-18 06:25


How to Open ports on the Firewall using the Quick Configuration

Description

This article explains how to open ports on the SonicWall for the following options:

- Web Services

- FTP Services

- Mail Services

- Terminal Services

- Other Services

Resolution

Consider the following example where the server is behind the firewall. This is the server we would like to allow access to.

- The Firewall's WAN IP is 1.1.1.1

- The server's private IP is 192.168.1.100

- We would like to NAT the server IP to the firewall's WAN IP (1.1.1.1)

To allow access to the server, select the QUICK CONFIGURATION option from the top of the page in the web GUI. This opens the configuration dialog.

Image

 

  • Select Public Server Guide in the following dialog

Image

  • The following options are available in the next dialog
  • Web Services: Allows HTTP (TCP port 80) and HTTPS (TCP port 443)
  • FTP Services: Allows TCP port 21
  • Mail Services: Allows SMTP (TCP port 25), POP3 (TCP port 110) and IMAP (TCP port 143)
  • Terminal Services: Allows RDP (TCP port 3389) and Citrix ICA (TCP port 1494)
  • Other Services: You can select other services from the drop-down list

Image

  • In the following dialog, enter the IP address of the server. This is similar to creating an Address Object. For our example, the server IP will be 192.168.1.100.

Image

  • The next dialog requires the public IP of the server. Typically, the private IP is NAT'ed to the SonicWall's WAN IP, but you can also enter a different public IP address if you would like to translate the server to a different IP. For our example, the IP address is 1.1.1.1.

Image

  • The following dialog lists the configuration that will be added once the wizard is complete.

Image

  • Select Apply to complete the process.

You can verify if the rules and NAT policies have been created by checking under Manage | Policies | Rules | Access Rules | NAT Policy (as shown below)

Image

Image
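
Once the wizard has finished, a quick way to confirm the published services answer from outside your network is to probe the wizard's default Ports against the public IP used in this example (1.1.1.1). This Node.js sketch is only a convenience check and assumes the service/port pairs listed above.

// Minimal sketch: probe the wizard's default service ports on the public IP.
import * as net from "node:net";

const services: [string, number][] = [
  ["HTTP", 80], ["HTTPS", 443], ["FTP", 21], ["SMTP", 25],
  ["POP3", 110], ["IMAP", 143], ["RDP", 3389], ["Citrix ICA", 1494],
];

for (const [name, port] of services) {
  const socket = net.connect({ host: "1.1.1.1", port });
  socket.setTimeout(3000);
  socket.once("connect", () => { console.log(`${name} (${port}): open`); socket.destroy(); });
  socket.once("timeout", () => { console.log(`${name} (${port}): no answer`); socket.destroy(); });
  socket.once("error", () => console.log(`${name} (${port}): refused or unreachable`));
}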


https://www.sonicwall.com/en-us/support/knowledge-base/170503853090538

Author: Angelo A Vitale
Last update: 2018-12-18 06:27


Contact Form 7 Code

[simple_tooltip content='You must accept the terms and conditions before sending your message.']Privacy Policy[/simple_tooltip]

[acceptance consent-checkbox]I understand that this form collects my name and email so I can be contacted. For more information, please check our <a href="https://example.com/privacy/">privacy policy</a>.[/acceptance]

acceptance_as_validation: on


/* Keep the acceptance checkbox and its label text on one line */
.wpcf7-form-control.wpcf7-acceptance label {
display: flex;
line-height: 1.2;
}

/* Size and space the checkbox itself */
.wpcf7-form-control.wpcf7-acceptance input[type="checkbox"] {
width: auto;
margin-top: 0;
margin-right: 5px;
}

/* Remove the default left indent on the list item */
.wpcf7-form-control.wpcf7-acceptance .wpcf7-list-item {
margin-left: 0;
}

/* Center the submit button */
.wpcf7-submit {
display: block;
margin: 0 auto;
}

Author: Angelo A Vitale
Last update: 2018-12-11 11:27


Contact Form 7 - Additional Settings

Additional Settings

You can include additional settings to each contact form by adding code snippets in the specific format into the Additional Settings field in the contact form’s edit screen.


By default, Contact Form 7 supports the following types of settings.

Subscribers-Only Mode

subscribers_only: true

You may want to ensure that only logged-in users can submit your contact form. In such cases, use the subscribers-only mode. In this mode, non-logged-in users can’t submit the contact form and will see a message informing them that login is required, while logged-in users can use it as usual.

No anti-spam verification will be provided for contact forms in the subscribers-only mode, since only logged-in users are supposed to be able to use them. If this assumption is not applicable to your site, subscribers-only mode probably isn't a good option for you.

Demo Mode

demo_mode: on

If you set demo_mode: on in the Additional Settings field, the contact form will be in the demo mode. In this mode, the contact form will skip the process of sending mail and just display “completed successfully” as a response message.

Acceptance as Validation

acceptance_as_validation: on

By default, an acceptance checkbox behaves differently from other types of fields; it does not display a validation error message even when the box is not checked. If you set acceptance_as_validation: on in the Additional Settings field, acceptance checkboxes in the contact form behave in the same way as other form fields.

For details, see Acceptance Checkbox.

Flamingo Settings

You can customize the Subject and From field values shown in the admin menu of Flamingo. For more details, see Save Submitted Messages with Flamingo.

Suppressing Message Storage

do_not_store: true

This setting tells message storage modules, such as Flamingo, not to store messages through this contact form.

JavaScript Code

on_sent_ok: "alert('sent ok');"
on_submit: "alert('submit');"

If you set on_sent_ok: followed by a one-line JavaScript code, you can tell the contact form the code that should be run when the mail is sent successfully. Likewise, with on_submit:, you can specify the code that should be run when the form is submitted, regardless of the outcome.

See also: Tracking Form Submissions with Google Analytics and Redirecting to Another URL After Submissions

Note: on_sent_ok and on_submit are deprecated and scheduled to be abolished by the end of 2017. You can use DOM events instead of these settings.
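
As a sketch of the DOM-event replacement, the snippet below listens for Contact Form 7's wpcf7mailsent and wpcf7submit events, which correspond roughly to the old on_sent_ok and on_submit hooks; load it in a script on the page that contains the form.

// Runs only when the mail was sent successfully (the old on_sent_ok case).
document.addEventListener("wpcf7mailsent", () => {
  console.log("sent ok"); // replaces on_sent_ok: "alert('sent ok');"
}, false);

// Runs on every submission, regardless of the outcome (the old on_submit case).
document.addEventListener("wpcf7submit", () => {
  console.log("submit"); // replaces on_submit: "alert('submit');"
}, false);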

Author: Angelo A Vitale
Last update: 2018-12-11 11:28


Contact Form 7 - Getting Started

Displaying a Form

Let’s start with displaying a form on your page. First, open the ‘Contact’ > ‘Contact Forms’ menu on your WordPress administration panel. You can manage multiple contact forms there.

Screenshot of Contact Form 7's Admin Screen

Just after installing the Contact Form 7 plugin, you’ll see a default form named “Contact form 1”, and a code like this:

[contact-form-7 id="1234" title="Contact form 1"]

Copy this code. Then, open the edit menu of the page (‘Pages’ > ‘Edit’) into which you wish to place the contact form. A popular practice is creating a page named “Contact” for the contact form page. Paste the code you copied into the contents of the page.

Now your contact form setup is complete. Visitors to your site can now find the form and start submitting messages to you.

Next, let’s see how you can customize your form and mail content.


Customizing a Form

You may feel that the default form is too simple for you and you want to add more fields to it. You can edit the form template in the admin screen and add other fields.

To add fields to a form, make tags for them and insert them into the ‘Form’ field. You’ll find unfamiliar codes in the ‘Form’ field, for example, [text* your-name]. These codes are called “tags” in the vocabulary for Contact Form 7.

A tag has a rather complex syntax, but don’t worry! You don’t have to learn it. You can use the “Generate Tag” tool to generate as many tags as you want.

The second word in the tag is its name. For example, the name of [text* your-name] is ‘your-name.’ This name is important as it is used later in your mail template.

Customizing Mail

You can edit mail templates in the ‘Mail’ field set as you did with the form template. You can use tags there as well, but note that tags for mail are different from those tags for forms.

Tags you can use in a mail template contain only one word in brackets and look like [your-name]. You should be aware that this ‘your-name’ is the same as the name of the form tag which is noted in the previous example. The two tags correspond with the same name.

In mail, [your-name] will be replaced by the user’s input value, which is submitted through the corresponding form field, which, in this case, is [text* your-name].
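
To make the correspondence concrete, here is a minimal sketch of a Form template and the matching Mail body; the field names are only examples, but the pattern (form-tag [text* your-name] on the form side, mail-tag [your-name] on the mail side) is the one described above.

Form template:

[text* your-name]
[email* your-email]
[textarea your-message]
[submit "Send"]

Mail body:

From: [your-name] <[your-email]>

[your-message]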

Author: Angelo A Vitale
Last update: 2018-12-11 11:29


Contact Form 7 - How Tags Work

How Tags Work

Contact Form 7 allows you to edit the templates of your contact forms and your mail (mail headers and message body) with various “tags.” In the terminology for Contact Form 7, tag means a tiny formed string of type enclosed in square brackets ([ ]).


Tags for forms and tags for mail look different from each other, for example, you can use [text* your-name] in your form and [your-name] in your mail. They each have a different syntax.

Form-tag Syntax

A tag in a form template (“form-tag”) will be replaced with an HTML element representing an input field when it is displayed in an actual form. The components of a form tag can be separated into four parts: type, name, options and values.

form tag example

Type is the most important factor, as it defines what type of HTML element will replace the tag, and what kind of input is expected through it.

Name is used for identifying the input field. Most form tags have a name, but there are exceptions.


Options specify details of behavior and appearance. Options are optional.

In most cases, values are used for specifying default values. It is possible that values can be used for other purposes as well; it depends on the type of the tag. Values are optional.

Note that the order of these parts is important: Options can't come before Name, and Values can't come before Options.
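
As an annotated sketch (the field name, id, class, and default value here are only examples; id: and class: are standard Contact Form 7 options):

[text* your-name id:name-field class:required-name "Jane Doe"]

  type:    text*  (a required single-line text input)
  name:    your-name
  options: id:name-field class:required-name
  values:  "Jane Doe"  (used as the default value)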

List of Form-tag Types

Mail-tag Syntax

A tag in a mail template (“mail-tag”) is much simpler. A mail tag has only one word in it. In most cases, the word corresponds to the name of a form tag, and it will be replaced with the form input through it.

mail tag example

Author: Angelo A Vitale
Last update: 2018-12-11 11:30


Contact Form 7 Admin Screen

Admin Screen

Screenshot image of the editor screen 1

Title of this contact form (❶). This title is just a label for a contact form and is used only for administrative purposes. You can use any title you like, e.g. “Job Application Form,” “Form for Event 2014/02/14″ and so on.

Shortcode for this contact form (❷). Copy this code and paste it into your post, page or text widget content where you want to place this contact form.

You can save, duplicate or delete this contact form here (❸).

Form Tab

Screenshot image of the editor screen 2

Form editing field (❷). You can customize form content here using HTML and form-tags. Line breaks and blank lines in this field are automatically formatted with <br /> and <p> HTML tags.

Tag generators (❶). By using these tag generators, you can generate form-tags without knowledge of them.

For more about form-tags, see How Tags Work.

Mail Tab

Screenshot image of the editor screen 3

You can edit a mail template for mail that is to be sent as a result of a form submission. You can use mail-tags in these fields.

Mail(2) template, which is an additional mail template that can have different contents from the primary Mail template, is also available.

For more information, see Setting Up Mail.

Messages Tab

Screenshot image of the editor screen 4

You can edit messages that are used for various situations, including “Validation errors occurred,” “Please fill in the required field,” etc.

Note that only plain text is available here. HTML tags and entities cannot be used in the message fields.

Additional Settings Tab

Screenshot image of the editor screen 5

You can add customization code snippets here. For details, see Additional Settings.

Author: Angelo A Vitale
Last update: 2018-12-11 11:30


Comodo

Enroll Mac OS Endpoints

  • After you have completed the setup process, Endpoint Manager will send an email to your users containing device enrollment instructions.
  • Users should open the mail on the device itself.

There are two steps to enroll a Mac OS device:

 

Step 1 - Install the EM Configuration Profile

  • Open the enrollment mail on the device you want to add
  • Click the link in the mail to open the device enrollment page
  • Next, click the first link under 'For Mac OS Devices' as shown below:

 

 

This will download and install the configuration file 'cdm.mobileconfig':


 

  • Click 'Install'.
  • You need to enter your device username and password to continue the installation:


 

 

  • After you have logged in, a confirmation dialog appears for the profile installation:


 

  • Click 'Show Details' if you want to view information about the profile
  • Click 'Continue'


 

  • Click 'Install'

The profile will be installed.

 

 

Step 2 - Install the EM Communication Client

 

After installing the profile, you need to install the communication client so the device can communicate with Endpoint Manager.

 

Download and install the communication client

  • Go back to the device enrollment page and click the 2nd link under 'For Mac OS devices':


 

 

This will start the communication client setup process:

 

 

  • Click 'Continue' 

 

The next step is to choose the location to install the client:

 

 

  • Click the disk icon if you want to choose a different install location. Click 'Continue' when you are ready.

The next step lets you choose the installation type and start the installation.

 



  • Click 'Install'

You need to enter your device password to allow the installation:

 

 

  • Enter your username and password and click 'Install Software'

 

 

 

Once installation is complete, the client will connect to the EM server:


 

 

The device is now enrolled and can be remotely managed from Endpoint Manager.

 

The next step is to install Comodo Client Security for Mac (CCS) on the endpoint. See Remotely Install Packages on Mac OS Devices for help to do this.

  • If no profiles are defined for the user, then the default profiles for Mac OS are applied. See Manage Default Profiles for more details.

Author: Angelo A Vitale
Last update: 2018-12-30 00:28


How to move devices to different company and to add the device into the group


Introduction:

Endpoint Manager keeps expanding options to make common jobs easier and save you time. This wiki explains how to move a device from one company to another and how to add a device to a group. When a device is enrolled under a specific company, you can move it to another company as required.

NOTE:

  • It is recommended to change the ownership of any device that is going to be moved to another group or company. See Step 2.

Step 1: Go to Endpoint Manager --> "Device List".

  • Select the devices which you want to configure to the new company.

 

Step 2: Then click "Owner" --> "Change Owner".

  • Type the owner's username to search among users, select the appropriate user from the intended company, and click "Change".
  • The device is now assigned to that user's company.

Step 3: Adding devices to other groups. There are two methods to achieve this.

METHOD 1: TO ADD THE DEVICE INTO THE GROUP THROUGH DEVICE MANAGEMENT:

  • Go to "Device List" --> Click the Device name in which you want to add the group.

  • Select the "Groups" tab --> Click "Add to Group".Enter the group name and click "add". Thus the device is added to the group.

Note: If there are no groups available in the device, It will set default group of its company.

 

METHOD 2: TO ADD THE DEVICE INTO THE GROUP THROUGH GROUP MANAGEMENT :

 

  • Go to "Device list → Group Management". The left panel shows the list of " Companies " which has Groups in it. The right panel shows all available " Groups ".
  • Expand a company and select a group where want to add the devices. Available devices under selected group will be shown at the right panel. Hit " Add Device to Group " button at the top right.
  • List of available devices under other groups in the same company will be in your view.

 

 

 

Author: Angelo A Vitale
Last update: 2018-12-30 00:33


Adding External Contacts to a Distribution Group – Office 365

First, use this post to add the External Contacts: Adding External Contacts to Exchange Online.

If you need to create the Distribution List, use this post: Creating a Distribution Group in Office 365 – Exchange Online.

Follow the steps below to add the external contact to a Distribution Group.

Login to the Office 365 Admin Center


Click the App Launcher (top left corner)


Click Admin





Open Exchange Admin Center





Click Recipients


Click Groups


Select the Distribution Group

Click the Edit Pencil  (A new window will open)




Select Membership


Click the Add (+) icon (A new window will open)



Double click the contact or select the external contact and click add ->


Click OK




The contact will now be in the list of members

Click Save

http://office365support.ca/adding-external-contacts-to-a-distribution-group/

Author: Angelo A Vitale
Last update: 2018-12-11 11:57


Create distribution lists in the Office 365 admin center

Used when you want to send email to a group of people without having to type each individual recipient's name, distribution lists are organized by a particular discussion subject (such as “Marketing”) or by users who share common work that requires them to communicate frequently. They also provide a way for you to automatically forward email to multiple email addresses.

Distribution lists are sometimes called distribution groups.


Create a distribution list (group)

  1. Go to the Office 365 admin center.
  2. Select the app launcher icon  and choose Admin.

    Can't find the app you're looking for? From the app launcher, select All apps to see an alphabetical list of the Office 365 apps available to you. From there, you can search for a specific app.
  3. Choose Groups in the left navigation pane.

    See your new Office 365 groups in the admin center preview
  4. Under Type of group, select the dropdown and choose Distribution list.

    Add a group page - Choose the dropdown and choose distribution list
  5. Enter a name and add a description for your new distribution list.

    You can choose whether you want people outside your organization to send email to the distribution list.
  6. When you're ready, click or tap Add to create the distribution list, and Close to view your distribution list.
  7. To add users to your distribution list, see Add a user or contact to an Office 365 distribution list.

Check out how to use distribution lists in Outlook 2016 and Outlook on the web in the Use contact groups (formerly distribution lists) in Outlook topic.

Check out Troubleshooting distribution list issues for help with distribution list issues.


https://support.office.com/en-us/article/create-distribution-lists-in-the-office-365-admin-center-b1ffe755-59e5-4369-826d-825f145a8400

Author: Angelo A Vitale
Last update: 2018-12-11 11:58


I can’t start Microsoft Outlook 2016, 2013, or 2010 or receive the error “Cannot start Microsoft Office Outlook. Cannot open the Outlook Window


Author: Angelo A Vitale
Last update: 2018-12-11 12:00


Repair Outlook Data Files (.pst and .ost)


Applies To: Outlook for Office 365 Outlook 2016 Outlook 2013 Outlook 2010 Outlook 2007
If your Outlook Data File (.pst or .ost) won't open, if you receive an error message that Outlook can't open this set of folders, or if you suspect the file is damaged, you can use the Inbox Repair tool (SCANPST.EXE) to diagnose and repair errors in the data file. The Inbox Repair tool checks the Outlook data files on your computer to see if they're in good shape.

If you're using an Exchange email account, you can delete the offline Outlook Data File (.ost) and Outlook will recreate the offline Outlook Data File (.ost) the next time you open Outlook.

Note: The Inbox Repair tool doesn't connect to or analyze any data stored in an Exchange mailbox. The tool only looks for errors (corruption) and, if there are any, gives you the opportunity to allow the tool to fix those errors.


Repair an Outlook data file (.pst) file

  1. Exit Outlook and browse to one of the following file locations:

    • Outlook 2016: C:\Program Files (x86)\Microsoft Office\root\Office16
    • Outlook 2013: C:\Program Files (x86)\Microsoft Office\Office15
    • Outlook 2010: C:\Program Files (x86)\Microsoft Office\Office14
    • Outlook 2007: C:\Program Files (x86)\Microsoft Office\Office12
  2. Open SCANPST.EXE.
  3. Select Browse to select the Outlook Data File (.pst) you want to scan. If you need help locating your Outlook Data File, see Locating the Outlook Data Files.

    Note: By default, a new log file is created during the scan. You can choose Options and opt not to have a log created, or you can have the results appended to an existing log file.

  4. Choose Start to begin the scan.
  5. If the scan finds errors, choose Repair to start the process to fix them.

    Shows results of scanned Outlook .pst data file using the Microsoft Inbox Repair tool, SCANPST.EXE

    Note: The scan creates a backup file during the repair process. To change the default name or location of this backup file, in the Enter name of backup file box, enter a new name, or choose Browse to select the file you want to use.

  6. When the repair is complete, start Outlook with the profile associated with the Outlook Data File you just repaired.

What happens after you repair an Outlook Data File?

In the Folder Pane, you might see a folder named Recovered Personal Folders that contains your default Outlook folders or a Lost and Found folder. Although the repair process might recreate some of the folders, they may be empty. The Lost and Found folder contains any folders and items recovered by the repair tool that Outlook can't place in their original structure.

Create new data file

You can create a new Outlook Data File and drag the items in the Lost and Found folder into the new data file. After you've moved all the items, you can remove the Recovered Personal Folders (.pst) file, including the Lost and Found folder.

Recover items from the backup (.bak) file

If you can open the original Outlook Data File, you might be able to recover additional items. The Inbox Repair tool creates a backup file with the same name as the original, but with a .bak extension, and saves it in the same folder. There may be items in the backup file that you might be able to recover that the Inbox Repair tool couldn't.

  1. Browse to the folder where the .pst file is stored and you'll find the .bak file (for example, kerimills01@outlook.com.bak) created by the Inbox Repair tool.
  2. Make a copy of the .bak file and rename it with a bak.pst extension. For example, kerimills01@outlook.com.bak.pst.
  3. Import the bak.pst file into Outlook, and use the Import and Export Wizard to import any additional recovered items into the newly created .pst file.

    Note: Learn how to import a .pst file by reading Import email, contacts, and calendar from an Outlook .pst file.

Locating the Outlook Data Files

You can check the location of your Outlook Data Files in Outlook.

  1. Select File > Account Settings > Account Settings.

  2. Select the Data Files tab.

  3. All Outlook Data Files (.pst) and Offline Data Files (.ost) are listed along with the name of the account the files are associated with.

Author: Angelo A Vitale
Last update: 2018-12-11 12:01


GSuite and Office 365 Informational email

G Suite is a hosted mail service that requires a unique domain, and it updates the MX records automatically.


If you set up G Suite using nardonebros.com, that is why you are having email issues: the mail never leaves the Gmail server, because it does not know the hot igloo server exists.



I will have your mail issues resolved soon; please review the options below.


Option 1: (Office 365 email / optional dual email delivery)

Create a full office 365 user (50 gb of exchange 365 email, and 5 office installs) and send the mail to both your office 365 account and another email account at the same time.


Option 2: (Configure GSuite with new domain name)
Create an office 365 user with the 5 installs of office only, (no Email)
Purchase a second domain name for $30.00 a year and configure G-Suite (example: nardonepizza.com, nardonebros.net)
Create a contact on the microsoft exchange server that forwards vinnie@nardonebros.com to the vinnie@nardoneotherdomain.com
(note your emails will come from the second domain, but you will receive email from both domains)


Option 3: (Office 365 email and & configure GSuite)
Create a full office 365 user (50 gb of exchange 365 email, and 5 office installs) and send the mail to both your office 365 account and another email account at the same time.
Purchase a second domain name for $30.00 a year and configure G-Suite (example: nardonepizza.com, nardonebros.net)
(note your emails will come from the second domain, but you will receive email from both domains)

The secure email will work on your phone and in chrome
https://chrome.google.com/webstore/detail/cipherpost-pro/jjklfihngajdehchifejpbepdgbbejml?hl=en

Please let me know what Nardone related Apps you are using in Gsuite.

Author: Angelo A Vitale
Last update: 2018-12-11 12:34


Overview of importing your organization's PST files to Office 365


Applies To: Office 365 Admin
See one of the following topics for detailed, step-by-step instructions for bulk-importing your organization's PST files to Office 365.

  • Network upload: Use network upload to import PST files to Office 365

  • Drive shipping: Use drive shipping to import PST files to Office 365

Tip: The previous topics are for administrators. Are you trying to import PST files to your own mailbox? See Import email, contacts, and calendar from an Outlook .pst file


https://support.office.com/en-us/article/overview-of-importing-your-organization-s-pst-files-to-office-365-ba688e0a-0fcb-4bd7-8e57-2b669564ea84

Author: Angelo A Vitale
Last update: 2018-12-11 12:48


Create an Outlook Data File (.pst) to save your information

Applies To: Outlook 2016 Outlook 2013
Last updated 2017-06-05

  1. From the Inbox, select New Items > More Items > Outlook Data File.
  2. Enter a File name.
  3. To add a password, check the Add Optional Password box.
  4. Select OK. Type a password in both the Password and Verify Password text boxes and select OK again.

    If you set a password, you must enter it every time that the data file is opened — for example, when Outlook starts or when you open the data file in Outlook.


Create a new Outlook data file


About Outlook Data Files (.pst and .ost)

When you run Outlook for the first time, the necessary data files are created automatically.

Sometimes additional data files are needed. For example, older messages and items that you don’t use regularly can be archived to an Outlook Data File (.pst). Or, if your online mailbox is near your storage quota, you could move some items to an Outlook Data File (.pst).

Outlook Data Files (.pst) are saved on your computer in the Documents\Outlook Files folder.

An Outlook Data File (.pst) is used for POP3 email accounts. Additionally, when you want to create archives or backup files from any of your accounts in Outlook, Outlook Data Files (.pst) are used.

Some accounts use an offline Outlook Data File (.ost). This is a synchronized copy of the messages saved on a server and that can be accessed from multiple devices and applications such as Outlook. These accounts include IMAP, Microsoft Exchange Server, and Outlook.com accounts.

Offline Outlook Data Files are saved in the drive:\Users\user\AppData\Local\Microsoft\Outlook folder. It isn’t necessary to back up an offline Outlook Data File (.ost) as it is a copy of the information on the server. If you set up the account again or on another computer or device, a synchronized copy of your messages is downloaded from the server.

Author: Angelo A Vitale
Last update: 2018-12-15 11:11


NK2 Outlook Auto Complete List


Import or copy the Auto-Complete List to another computer

Applies To: Outlook 2010 Outlook 2007
The Auto-Complete List is a feature that displays suggestions for names and email addresses as you begin to type them. These suggestions are possible matches from a list of names and email addresses from the email messages that you have sent.

Auto-Complete list


In Microsoft Outlook 2010, the Auto-Complete List is no longer saved in a file with an extension of .nk2. The Auto-Complete List entries are now saved in your Microsoft Exchange Server mailbox or in the Outlook Data File (.pst) for your account. However, if you want to copy the Auto-Complete List (.nk2) from another computer that was using a POP3 email account or Outlook 2007, you must import the file.

Step 1: Copy the Auto-Complete file from the old computer

  1. Because the default folder is a hidden folder, the easiest way to open it is to use the command %APPDATA%\Microsoft\Outlook on the Start menu.

    • Windows 7 Click Start. Next to the Shut down button, in the Search programs and files box, type %APPDATA%\Microsoft\Outlook and then press Enter.

      Windows 7 Start menu with Search box
    • Windows Vista Click Start. Next to the Shut Down button, in the Search box, type %APPDATA%\Microsoft\Outlook and then press Enter.

      Windows Vista Start button and Search box
    • Windows XP Click Start, click Run, type %APPDATA%\Microsoft\Outlook and then press Enter.

      Windows XP Start button and Run command
  2. After you press Enter, the folder in which your Auto-Complete List file is saved opens.

    NOTE: By default, file extensions are hidden in Windows. To change whether file extensions are shown, in Windows Explorer, on the Tools menu (in Windows 7 or Windows Vista, press the ALT key to see the Tools menu), click Folder Options. On the View tab, select or clear the Hide extensions for known file types check box.
  3. Copy the file to the new computer. The file is small and can be placed on removable media such as a USB memory stick.

Top of page

Step 2: Copy the Auto-Complete file to the new computer

  1. On the new computer, in Control Panel, click or double-click Mail.

    Mail appears in different Control Panel locations depending on the version of the Microsoft Windows operating system, the Control Panel view selected, and whether a 32- or 64-bit operating system or version of Outlook 2010 is installed.

    The easiest way to locate Mail is to open Control Panel in Windows, and then in the Search box at the top of window, type Mail. In Control Panel for Windows XP, type Mail in the Address box.

    NOTE: The Mail icon appears after Outlook starts for the first time.
  2. Click Show Profiles.
  3. Make a note of the name of the profile. You will need to change the .nk2 file name to match the name later.
  4. Copy the .nk2 file to the new computer, into the folder in which Outlook configurations are saved. Because this folder is a hidden folder, the easiest way to open it is to use the command %APPDATA%\Microsoft\Outlook on the Start menu.

    • Windows 7 Click Start. Next to the Shut down button, in the Search programs and files box, type %APPDATA%\Microsoft\Outlook and then press Enter.
    • Windows Vista Click Start. Next to the Shut Down button, in the Search box, type %APPDATA%\Microsoft\Outlook and then press Enter.
    • Windows XP Click Start, click Run, type %APPDATA%\Microsoft\Outlook and then press Enter.
  5. After the file is copied to the folder, right-click the file, click Rename, and change the name to match the profile name that appeared in step 3.

Top of page

Step 3: Import the Auto-Complete List

You are now ready to start Outlook and import the file, but you must start Outlook with a special one-time command.

  • Do one of the following:

    • Windows 7 Click Start {picture}. Next to the Shut down button, in the Search programs and files box, type outlook /importnk2 and then press Enter.
    • Windows Vista Click Start {picture}. Next to the Shut Down button, in the Search box, type outlook /importnk2 and then press Enter.
    • Windows XP Click Start {picture}, click Run, type outlook /importnk2 and then press Enter.

The Auto-Complete List should now have the entries from your other computer when you compose a message and begin typing in the To, Cc, or Bcc boxes.

Author: Angelo A Vitale
Last update: 2018-12-15 11:12


Using AD to Add an Alias to an Office 365 Email Account

If you are using Office 365 with Azure AD Connect (or the older DirSync) you know that some changes to accounts cannot be made via the O365 admin portal. For instance, if someone gets married and changes their name, you may wish to add a new email address for them. If you try to add an alias (second email address) to an account, you will get an error similar to this:



This error has made many people think they need to keep an Exchange Server up and running on their local network. Thankfully, that’s not the case. You can easily add an alias via Active Directory Users and Computers (ADUC).

To do this, open ADUC and find the User you want to modify. Make sure that Advanced Features is checked, under View on the top menu. Double click on the User then click on the Attribute Editor tab.





Scroll down to the proxyAddresses attribute and double-click to open it for editing. It may be blank, which is fine, or it may already have some information in it. If it's blank, your first step is to add the existing email account in the format SMTP:email@testemail.com. Make sure to capitalize SMTP, as that's how the default (primary) address is determined. For the alias account you want to add, use the format smtp:aliasemail@testemail.com. You can add as many aliases as needed; just be sure that they all use lowercase smtp.
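
As a quick sanity check before you paste values in, this sketch confirms there is exactly one capitalized SMTP: entry; the helper and the sample addresses are just illustrations based on the example above.

// Minimal sketch: make sure a proxyAddresses list has exactly one primary "SMTP:" entry.
const proxyAddresses = [
  "SMTP:email@testemail.com",      // primary address (capitalized SMTP:)
  "smtp:aliasemail@testemail.com", // alias (lowercase smtp:)
];

const primaries = proxyAddresses.filter((a) => a.startsWith("SMTP:"));
console.log(primaries.length === 1
  ? `OK - primary address is ${primaries[0]}`
  : `Problem - found ${primaries.length} capitalized SMTP: entries`);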

After entering the information, it should look something like this:



When done click OK until you are out of ADUC and then sit back and be patient. The cloud side will synchronize and show the new alias, but it isn’t always fast. You can do a manual sync via Azure AD Connect / DirSync, but even then it can take some time to appear on the O365 side of things.

Author: Angelo A Vitale
Last update: 2018-12-15 11:13


Ways to migrate multiple email accounts to Office 365

Applies To: Office 365 Admin Office 365 Small Business Admin
Your organization can migrate email to Office 365 from other systems. Your administrators can migrate mailboxes from an Exchange Server or migrate email from another email system. And your users can import their own email, contacts, and other mailbox information to an Office 365 mailbox created for them. Your organization also can work with a partner to migrate email.

Before you start an email migration, review limits and best practices for Exchange Online to make sure you get the performance and behavior you expect after migration.

See Decide on a migration path or Exchange migration advisors for help with choosing the best option for your organization.

You can also view an overview video:

Migrate mailboxes from Exchange Server

For migrations from an existing on-premises Exchange Server environment, an administrator can migrate all email, calendar, and contacts from user mailboxes to Office 365.

An administrator performs a staged or cutover migration to Office 365. All email, contacts, and calendar information can be migrated for each mailbox. There are three types of email migrations that can be made from an Exchange Server:

  • Migrate all mailboxes at once (cutover migration) or Express migration 

    Use this type of migration if you're running Exchange 2003, Exchange 2007, Exchange 2010, or Exchange 2013, and if there are fewer than 2000 mailboxes. You can perform a cutover migration by starting from the Exchange admin center (EAC); see Perform a cutover migration to Office 365. See Use express migration to migrate Exchange mailboxes to Office 365 to use the Express migration.

    IMPORTANT: With cutover migration, you can move up to 2000 mailboxes, but due to length of time it takes to create and migrate 2000 users, it is more reasonable to migrate 150 users or less.
  • Migrate mailboxes in batches (staged migration)

    Use this type of migration if you're running Exchange 2003 or Exchange 2007, and there are more than 2,000 mailboxes. For an overview of staged migration, see What you need to know about a staged email migration to Office 365. To perform the migration tasks, see Perform a staged migration of Exchange Server 2003 and Exchange 2007 to Office 365.
  • Migrate using an integrated Exchange Server and Office 365 environment (hybrid)

    Use this type of migration to maintain both on-premises and online mailboxes for your organization and to gradually migrate users and email to Office 365. Use this type of migration if:

    • You have Exchange 2010 and more than 150-2,000 mailboxes.
    • You have Exchange 2010 and want to migrate mailboxes in small batches over time.
    • You have Exchange 2013.
    For more information, see Plan an Exchange Online hybrid deployment in Office 365.

Use Office 365 Import Service to migrate PST-files

If your organization has many large PST files, you can use the Office 365 Import Service to migrate email data to Office 365.

An administrator migrates PST files to Office 365. You can use the Office 365 Import Service to either upload the PST files through a network, or to mail the PST files in a drive that you prepare.

For instructions, see Office 365 Import Service.

Migrate email from another IMAP-enabled email system

You can use the Internet Message Access Protocol (IMAP) to migrate user email from Gmail, Exchange, Outlook.com, and other email systems that support IMAP migration. When you migrate the user's email by using IMAP migration, only the items in the users' inbox or other mail folders are migrated. Contacts, calendar items, and tasks can't be migrated with IMAP, but they can be by a user.

IMAP migration also doesn't create mailboxes in Office 365. You'll have to create a mailbox for each user before you migrate their email.

An administrator performs an IMAP migration to Office 365. All email, but not contacts or calendar information, can be migrated for each mailbox. To migrate email from another mail system, see Migrate your IMAP mailboxes to Office 365. After the email migration is done, any new mail sent to the source email isn't migrated.

Have users import their own email

Users can import their own email, contacts, and other mailbox information to Office 365. See Migrate email and contacts to Office 365 for Business to learn how.

A user can import email, contacts, and calendar information to Office 365.

Work with a partner to migrate email

If none of the types of migrations described will work for your organization, consider working with a partner to migrate email to Office 365.

Method: Use a third-party email migration tool

Description: Migration tools can help speed up and simplify email migration. You'll find a list of tools in the Office 365 Marketplace.

Method: Hire a partner to help migrate your email

Description: You'll find a list of partners in the Office 365 Marketplace.

Author: Angelo A Vitale
Last update: 2018-12-15 11:14


Step-By-Step: Migration of Exchange 2003 Server to Office 365

While most of the focus this week was centered on the end of support for Windows XP, many IT professionals also had Microsoft Exchange 2003 top of mind, as its official support also ended on April 8th. Those currently running Exchange 2003 do have Office 365 as an option, and preparation for a cutover migration might be in order. Cutover migrations are great as they take advantage of your existing setup and have no requirement for a hybrid server.

One point to take into consideration is that a cutover migration is not suitable for companies looking to implement single sign-on. During a cutover migration, the accounts are provisioned as cloud accounts, and this will wreak havoc on any implementation of single sign-on (for this reason, as soon as you set up single sign-on, cutover migrations are disabled on the portal site).

The process is really straightforward and works really well. With any successful migration, we need to plan and test before we implement. The following post will cover a cutover migration from a legacy Exchange 2003 system.

Step 1: Planning

Before we can attempt the migration, we need to know what we are doing. Microsoft has done a great job of providing good quality information for administrators to use to plan the migration to Office 365. I always use the Exchange Deployment Assistant as a guide for all my migrations. The site is up to date and covers most migration scenarios for Office 365.

  1. Open the Exchange Deployment Assistant site.

  2. Once the site is launched, you are presented with three options. Since I am doing a simple cutover migration from Exchange Server 2003, I am going to use the Cloud Only option.

  3. Click Cloud Only.

  4. You are now asked a series of questions on end state goals and existing setup.

  5. Answer all the questions.

  6. Click the next arrow.

  7. The Exchange Deployment Assistant will generate a step by step guide for you to follow. Make sure to read and understand what you are doing.

 

 

Step 2: Testing the Existing Setup

 

Using our guide from the Exchange Deployment Assistant, we need to make sure that our Exchange 2003 infrastructure supports RPC over HTTP and Outlook Anywhere. Use the guide to verify the Exchange 2003 setup. Once the setup is verified to be correct, use the Microsoft Remote Connectivity Analyzer to verify RPC over HTTP and Outlook Anywhere.

  1. Open the Microsoft Remote Connectivity Analyzer site.

  2. Select the Outlook Anywhere (RPC over HTTP) test.

  3. Click Next.

  4. Enter all the information that is requested. Keep in mind that with Exchange 2003, using Autodiscover to detect the settings will not work; Exchange 2003 doesn't support Autodiscover, so these values will have to be entered manually.

  5. Enter the Verification code.

  6. Click Perform Test.

  7. The test will start.

  8. Once the test is successful, you can continue to the next step. If it's successful with warnings, review the warnings and correct them if needed. I get a warning here because I am using a multi-name UCC certificate. If the test fails, use the report generated and the guide (Exchange Deployment Assistant) to resolve the issues.

 

 

Step 3: Configure Cutover Migration

 

  1. Login to the Office 365 Admin Center.

  2. Open Exchange Admin Center.

  3. Click Migration.

  4. Click the drop down menu and select Migrate to Exchange Online.

  5. Select Cutover migration (supported by Exchange Server 2003 and later versions).

  6. Click Next.

  7. Enter on-premises account credentials.

  8. Click Next.

  9. Enter the on-premises Exchange Server.

  10. Enter the RPC Proxy Server.

  11. Click Next.

  12. Enter a name for the New Migration Batch.

  13. Click Next.

  14. Select a user from Office 365 to get a report once the migration is completed. You can choose to automatically start the batch or manually start the batch later.

  15. Click New.

  16. The new migration batch is created and the status is set to syncing.

 

 

This is where we wait for the migration to happen. Depending on the number of accounts and the amount of data, this can take some time. You can view the migration details by clicking View Details under the Mailbox Status.

You will see the accounts provisioning on the Office 365 account and then start to sync from Exchange 2003 to Office 365.

Provisioning

 

Syncing

 

 

Step 4: Complete The Migration 

When all the accounts are provisioned and the sync from Exchange 2003 to Office 365 is completed, you will get a report emailed to you. Once you get the report, you can complete the migration process.

  1. Migrate Public Folders – Microsoft has released a whitepaper for the companies that have public folders to migrate to Office 365. Migrate from Exchange Public Folders to Microsoft Office 365
  2. Assign Office 365 licenses to all the users. Use this BLOG post and jump to the section about assigning licenses - Creating Cloud Users for the NEW Office 365
  3. Verify that all the DNS records are updated and pointed towards Office 365 services. Use the DNS section in this BLOG post - Adding and Verifying a Domain for the NEW Office 365 (a quick MX record check is also sketched after this list). WARNING – Once you change the MX record to point at Office 365, there is some DNS replication time. During this time email will be delivered to either Exchange 2003 or Office 365. It's important to keep your migration batch job running for up to 72 hours after switching the MX record.
  4. Configure the desktops to use Office 365 services - Use this BLOG post - Configuring Desktops for the NEW Office 365
  5. Once you are comfortable that all the email is migrated to Office 365 and the MX record DNS replication is completed, you can stop the migration batch job.
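
As a convenience when checking item 3 above, the sketch below (Node.js; the domain is a placeholder to replace with your own) lists a domain's MX records and flags the Office 365 host, which normally ends in mail.protection.outlook.com.

// Minimal sketch: confirm a domain's MX record now points at Office 365.
import { promises as dns } from "node:dns";

(async () => {
  const records = await dns.resolveMx("example.com"); // placeholder domain
  for (const { priority, exchange } of records.sort((a, b) => a.priority - b.priority)) {
    const isOffice365 = exchange.endsWith("mail.protection.outlook.com");
    console.log(`${priority}\t${exchange}${isOffice365 ? "  <- Office 365" : ""}`);
  }
})();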

 

At this point the migration is complete and you can retire your Exchange 2003 services. Stay tuned for further Office 365 Step-By-Step posts in the near future.


https://blogs.technet.microsoft.com/canitpro/2014/04/09/step-by-step-migration-of-exchange-2003-server-to-office-365/

Author: Angelo A Vitale
Last update: 2018-12-17 12:23


Step-By-Step: Migrating from Exchange 2007 to Office 365

A cutover migration is the simplest way to get all your existing email into Office 365. As the name implies, it’s a cutover from one service to another. As showcased in a previous post, cutover migrations are supported for Exchange 2003, 2007 and 2010, for organizations with fewer than 1000 mailboxes. The process is pretty straightforward; however, be sure to properly test the migration plan prior to implementing it.
 
Step 1: Planning

Microsoft has done a great job of providing good quality information for administrators to use to plan the migration to Office 365. It is always recommended to use the Exchange Deployment Assistant as a guide for all migrations. The site is up to date and covers most migration scenarios for Office 365.

  1. Open the Exchange Deployment Assistant site.
     
  2. Once the site is launched, you are presented with three options. Since I am doing a simple cutover migration from Exchange Server 2007, I am going to use the Cloud Only option.
     
  3. Click Cloud Only.
     
  4. You are now asked a series of questions on end state goals and existing setup.
     
  5. Answer all the questions.
     
  6. Click the Next arrow.
     
  7. The Exchange Deployment Assistant will generate a step by step guide for you to follow. Make sure to read and understand what you are doing.
     

Step 2: Testing the Existing Setup

Using the guide from the Exchange Deployment Assistant, we need to make sure that our Exchange 2007 infrastructure supports Outlook Anywhere (RPC over HTTP) and Autodiscover. Use the guide to verify the Exchange 2007 setup. Once the setup is verified to be correct, use the Microsoft Remote Connectivity Analyzer to verify Outlook Anywhere (RPC over HTTP). Make sure that you have assigned the correct permissions to the mailboxes that you are migrating.

  1. Open the Microsoft Remote Connectivity Analyzer site.
     
  2. Select the Outlook Anywhere (RPC over HTTP) test.
     
  3. Click Next.
     
  4. Enter all the information that is requested. You will want to verify that you are using Autodiscover to detect server settings.
     
  5. Enter the Verification code.
     
  6. Click Perform Test.

 
Once the test is successful, you can continue to the next step. If it’s successful with warnings, review the warnings and correct them if needed. If the test fails, use the report generated and the guide (Exchange Deployment Assistant) to resolve the issues.

Use the guide and assign the correct permissions to the mailboxes. If you don’t assign the migration account permissions on the mailboxes, they will not migrate.
 
Step 3: Configure Cutover Migration

  1. Open Internet Explorer.
     
  2. Login to the Office 365 Admin Center.
     
  3. Open Exchange Admin Center.
     

     
  4. Click Migration.
     

     
  5. Click the + drop down menu and select Migrate to Exchange Online.
     

     
  6. Select Cutover migration (supported by Exchange Server 2003 and later versions).
     

     
  7. Click Next.
     

  8. Enter on-premises account credentials (this is the same account that you gave full acccess permissions to on all the mailboxes).
     

  9. Click Next.
     

     
     
    When configured properly, Autodiscover should resovle the on-premise Exchange Server and the RPC Proxy Server
     

  10. Click Next.
     

     

  11. Enter a name for the New Migration Batch.
     

  12. Click Next.
     

     

  13. Select a user to get a report once the migration is completed. Multiple accounts can be selected. If you are ready to start the migration, then automatically start the batch. If you are not ready to start the migration, then select manually start the batch later.
     

  14. Click New.
     

     

  15. The new migration batch is created and the status is set to syncing. 
     

     

  16. Depending on the number of accounts and the amount of data, this can take some time to migrate. Migration details can be viewed by clicking View Details under the Mailbox Status providing sight to the accounts being provisioned in Office 365 as well as the start of the sync from Exchange 2007 to Office 365.
     
     
     
     

Step 4: Completion of the Migration

When all the accounts are provisioned and the sync from Exchange 2007 to Office 365 is completed, you will get a report emailed to you. Once you get the report, you can complete the migration process.

  1. Migrate Public Folders – Microsoft has released a whitepaper for the companies that have public folders to migrate to Office 365. Migrating from Exchange Public Folders to Microsoft Office 365.
     
  2. Assign Office 365 licenses to all the users. Details can be found here.
     
  3. Verify that all the DNS records are updated and pointed towards Office 365 services. Details can be found here.
     
    Note: Once you change the MX record to point at Office 365, there is some DNS replication time. During this time, email will be delivered to either Exchange 2007 or Office 365. It’s important to keep your migration batch job running for up to 72 hours after switching the MX record.
     
  4. Configure the desktops to use Office 365 services. Details can be found here.
     
  5. Once you are comfortable that all the email is migrated to Office 365 and the MX record DNS replication is completed, you can stop the migration batch job.

At this point the migration is complete and the Exchange 2007 server can be retired.


https://blogs.technet.microsoft.com/canitpro/2013/11/19/step-by-step-migrating-from-exchange-2007-to-office-365/

Author: Angelo A Vitale
Last update: 2018-12-17 12:24


Step-By-Step: Migration of Exchange 2003 Server to Office 365


While most of the focus this week was centered on the end of support for Windows XP, many IT professionals also had Microsoft Exchange 2003 top of mind, as it too reached the end of official support on April 8th. Those currently running Exchange 2003 do have Office 365 as an option, and preparation for a cutover migration might be in order. Cutover migrations are great because they take advantage of your existing setup and have no requirement for a hybrid server.

One point to take into consideration is that a cutover migration is not suitable for companies looking to implement single sign-on. During a cutover migration, the accounts are provisioned as cloud accounts, which will wreak havoc on any implementation of single sign-on (for this reason, as soon as you set up single sign-on, cutover migrations are disabled on the portal site).

The process is straightforward and works well. As with any successful migration, we need to plan and test before we implement. The following post will cover a cutover migration from a legacy Exchange 2003 system.

Step 1: Planning

Before we can attempt the migration, we need to know what we are doing. Microsoft has done a great job of providing good quality information for administrators to use to plan the migration to Office 365. I always use the Exchange Deployment Assistant as a guide for all my migrations. This site is up to date and will cover most of the migration scenarios to Office 365.

  1. Open the Exchange Deployment Assistant site.

  2. Once the site is launched, you are presented with three options. Since I am doing a simple cutover migration from Exchange Server 2003, I am going to use the Cloud Only option.

  3. Click Cloud Only.

  4. You are now asked a series of questions on end state goals and existing setup.

  5. Answer all the questions.

  6. Click the Next arrow.

  7. The Exchange Deployment Assistant will generate a step-by-step guide for you to follow. Make sure to read and understand what you are doing.

 

Step 2: Testing the Existing Setup

Using our guide from the Exchange Deployment Assistant, we need to make sure that our Exchange 2003 infrastructure supports Outlook Anywhere (RPC over HTTP). Use the guide to verify the Exchange 2003 setup. Once the setup is verified to be correct, use the Microsoft Remote Connectivity Analyzer to verify Outlook Anywhere (RPC over HTTP).

  1. Open the Microsoft Remote Connectivity Analyzer site.

  2. Select the Outlook Anywhere (RPC over HTTP) test.

  3. Click Next.

  4. Enter all the information that is requested. Keep in mind that with Exchange 2003, using Autodiscover to detect the settings will not work; Exchange 2003 doesn't support Autodiscover, so these values have to be entered manually.

  5. Enter the Verification code.

  6. Click Perform Test.

  7. The test will start.

  8. Once the test is successful, you can continue to the next step. If it's successful with warnings, review the warnings and correct them if needed. I get a warning here because I am using a multi-name UCC certificate. If the test fails, use the report generated and the guide (Exchange Deployment Assistant) to resolve the issues.

 

Step 3: Configure Cutover Migration

  1. Login to the Office 365 Admin Center.

  2. Open Exchange Admin Center.

  3. Click Migration.

  4. Click the drop down menu and select Migrate to Exchange Online.

  5. Select Cutover migration (supported by Exchange Server 2003 and later versions).

  6. Click Next.

  7. Enter on-premises account credentials.

  8. Click Next.

  9. Enter the on-premises Exchange Server.

  10. Enter the RPC Proxy Server.

  11. Click Next.

  12. Enter a name for the New Migration Batch.

  13. Click Next.

  14. Select a user from Office 365 to get a report once the migration is completed. You can choose to automatically start the batch or manually start the batch later.

  15. Click New.

  16. The new migration batch is created and the status is set to Syncing.

This is where we wait for the migration to happen. Depending on the number of accounts and the amount of data, this can take some time. You can view the migration details by clicking View Details under the Mailbox Status.

You will see the accounts being provisioned in Office 365 and then starting to sync from Exchange 2003 to Office 365.

Provisioning

 

Syncing
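
If you prefer to watch progress from PowerShell rather than refreshing the admin center, a rough sketch (assuming an Exchange Online remote PowerShell session; "Cutover-2003" and the mailbox address are hypothetical names):

    # Per-mailbox status for the batch, then detailed statistics for one user
    Get-MigrationUser -BatchId "Cutover-2003" | Format-Table Identity, Status
    Get-MigrationUserStatistics -Identity "tom@contoso.com" | Format-List Status, SyncedItemCount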

 

Step 4: Complete the Migration

When all the accounts are provisioned and the sync from Exchange 2003 to Office 365 is completed, you will get a report emailed to you. Once you get the report, you can complete the migration process.

  1. Migrate Public Folders – Microsoft has released a whitepaper for the companies that have public folders to migrate to Office 365. Migrate from Exchange Public Folders to Microsoft Office 365
  2. Assign Office 365 licenses to all the users. Use this BLOG post and jump to the section about assigning licenses - Creating Cloud Users for the NEW Office 365
  3. Verify that all the DNS records are updated and pointed towards Office 365 services. Use the DNS section in this BLOG post - Adding and Verifying a Domain for the NEW Office 365.  WARNING – Once you change the MX record to point at Office 365, there is some DNS replication time. During this time email will be delivered to either Exchange 2003 or Office 365. It's important to keep your migration batch job running for up to 72 hours after switching the MX record.
  4. Configure the desktops to use Office 365 services - Use this BLOG post - Configuring Desktops for the NEW Office 365
  5. Once you are comfortable that all the email is migrated to Office 365 and the MX record DNS replication is completed, you can stop the migration batch job.
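
For step 2 above, licensing can also be done in bulk from the Windows Azure Active Directory Module for Windows PowerShell; a rough sketch, assuming your tenant's SKU is contoso:ENTERPRISEPACK (check with Get-MsolAccountSku) and a usage location of US:

    Connect-MsolService
    Get-MsolAccountSku    # list the license SKUs available in the tenant
    Get-MsolUser -All -UnlicensedUsersOnly | ForEach-Object {
        Set-MsolUser -ObjectId $_.ObjectId -UsageLocation "US"    # a usage location is required before licensing
        Set-MsolUserLicense -ObjectId $_.ObjectId -AddLicenses "contoso:ENTERPRISEPACK"
    }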

 At this point the migration is complete and you can retire your Exchange 2003 services. Stay tuned for further Office 365 Step-By-Step posts in the near future.


https://blogs.technet.microsoft.com/canitpro/2014/04/09/step-by-step-migration-of-exchange-2003-server-to-office-365/

Author: Angelo A Vitale
Last update: 2018-12-18 06:55


Step-By-Step: Migrating from Exchange 2007 to Office 365


A cutover migration is the simplest way to get all your existing email into Office 365. As the name implies, it’s a cutover from one service to another. As showcased in a previous post, cutover migrations are supported for Exchange 2003, 2007 and 2010, for organizations with fewer than 1000 mailboxes. The process is pretty straightforward; however, be sure to properly test the migration plan prior to implementing it.
 
Step 1: Planning

Microsoft has done a great job of providing good quality information for administrators to use to plan the migration to Office 365. It is always recommended to use the Exchange Deployment Assistant as a guide for all migrations. This site is up to date and will cover most of the migration scenarios to Office 365.

  1. Open the Exchange Deployment Assistant site.
     
  2. Once the site is launched, you are presented with three options. Since I am doing a simple cutover migration from Exchange Server 2007, I am going to use the Cloud Only option.
     
  3. Click Cloud Only.
     
  4. You are now asked a series of questions on end state goals and existing setup.
     
  5. Answer all the questions.
     
  6. Click the Next arrow.
     
  7. The Exchange Deployment Assistant will generate a step by step guide for you to follow. Make sure to read and understand what you are doing.
     

Step 2: Testing the Existing Setup

Using the guide from the Exchange Deployment Assistant, we need to make sure that our Exchange 2007 infrastructure supports Outlook Anywhere (RPC over HTTP) and Autodiscover. Use the guide to verify the Exchange 2007 setup. Once the setup is verified to be correct, use the Microsoft Remote Connectivity Analyzer to verify Outlook Anywhere (RPC over HTTP). Make sure that you have assigned the correct permissions to the mailboxes that you are migrating.
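
Before running the Remote Connectivity Analyzer test below, you can do a quick sanity check from the Exchange Management Shell on the Exchange 2007 Client Access server; a minimal sketch:

    Get-OutlookAnywhere | Format-List Identity, ExternalHostname    # Outlook Anywhere must be enabled with a public host name
    Test-OutlookWebServices                                         # exercises Autodiscover and the Exchange web services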

  1. Open the Microsoft Remote Connectivity Analyzer site.
     
  2. Select the Outlook Anywhere (RPC over HTTP) test.
     
  3. Click Next.
     
  4. Enter all the information that is requested. You will want to verify that you are using Autodiscover to detect server settings.
     
  5. Enter the Verification code.
     
  6. Click Perform Test.

 Once the test is successful, you can continue to the next step. If it’s successful with warnings, review the warnings and correct them if needed. If the test fails, use the report generated and the guide (Exchange Deployment Assistant) to resolve the issues.

Use the guide and assign the correct permissions to the mailboxes. If you don’t assign the migration account permissions on the mailboxes, they will not migrate.
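
A rough sketch of granting those permissions in bulk from the Exchange Management Shell on the Exchange 2007 server (CONTOSO\migadmin is a hypothetical migration account):

    # Cutover migrations need Full Access (or Receive As) on every mailbox being migrated
    Get-Mailbox -ResultSize Unlimited |
        Add-MailboxPermission -User "CONTOSO\migadmin" -AccessRights FullAccess -InheritanceType All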
 
Step 3: Configure Cutover Migration

  1. Open Internet Explorer.
     
  2. Login to the Office 365 Admin Center.
     
  3. Open Exchange Admin Center.
     

     
  4. Click Migration.
     

     
  5. Click the + drop down menu and select Migrate to Exchange Online.
     

     
  6. Select Cutover migration (supported by Exchange Server 2003 and later versions).
     

     
  7. Click Next.
     

  8. Enter on-premises account credentials (this is the same account that you gave full access permissions to on all the mailboxes).
     

  9. Click Next.
     

     
     
    When configured properly, Autodiscover should resolve the on-premises Exchange Server and the RPC Proxy Server.
     

  10. Click Next.
     

     

  11. Enter a name for the New Migration Batch.
     

  12. Click Next.
     

     

  13. Select a user to get a report once the migration is completed. Multiple accounts can be selected. If you are ready to start the migration, select automatically start the batch; if you are not ready, select manually start the batch later.
     

  14. Click New.
     

     

  15. The new migration batch is created and the status is set to syncing. 
     

     

  16. Depending on the number of accounts and the amount of data, the migration can take some time. Migration details can be viewed by clicking View Details under the Mailbox Status, providing visibility into the accounts being provisioned in Office 365 as well as the start of the sync from Exchange 2007 to Office 365.
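
For reference, the same cutover batch can be created from Exchange Online PowerShell instead of the wizard above; a minimal sketch with hypothetical names and addresses:

    $onPremCred = Get-Credential    # the on-premises account from step 8
    $endpoint = New-MigrationEndpoint -ExchangeOutlookAnywhere -Name "Ex2007Endpoint" `
        -Autodiscover -EmailAddress "migadmin@contoso.com" -Credentials $onPremCred
    New-MigrationBatch -Name "Cutover-2007" -SourceEndpoint $endpoint.Identity `
        -NotificationEmails "admin@contoso.com" -AutoStart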
     
     
     
     

Step 4: Completion of the Migration

When all the accounts are provisioned and the sync from Exchange 2007 to Office 365 is completed, you will get a report emailed to you. Once you get the report, you can complete the migration process.

  1. Migrate Public Folders – Microsoft has released a whitepaper for the companies that have public folders to migrate to Office 365. Migrating from Exchange Public Folders to Microsoft Office 365.
     
  2. Assign Office 365 licenses to all the users. Details can be found here.
     
  3. Verify that all the DNS records are updated and pointed towards Office 365 services. Details can be found here.
     
    Note: Once you change the MX record to point at Office 365, there is some DNS replication time. During this time, email will be delivered to either Exchange 2007 or Office 365. It’s important to keep your migration batch job running for up to 72 hours after switching the MX record.
     
  4. Configure the desktops to use Office 365 services. Details can be found here.
     
  5. Once you are comfortable that all the email is migrated to Office 365 and the MX record DNS replication is completed, you can stop the migration batch job.
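
For step 3 above, a quick way to spot-check the records is Resolve-DnsName (Windows 8/Server 2012 or later); a rough sketch for a hypothetical contoso.com domain, with the values current Office 365 tenants typically use noted in the comments:

    Resolve-DnsName contoso.com -Type MX                            # e.g. contoso-com.mail.protection.outlook.com
    Resolve-DnsName autodiscover.contoso.com -Type CNAME            # e.g. autodiscover.outlook.com
    Resolve-DnsName contoso.com -Type TXT | Select-Object Strings   # SPF, e.g. include:spf.protection.outlook.com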

At this point the migration is complete and the Exchange 2007 server can be retired.
 

Be sure to take advantage of the Microsoft Virtual Academy to learn additional aspects of Office 365 to better enable your organization.

https://blogs.technet.microsoft.com/canitpro/2013/11/19/step-by-step-migrating-from-exchange-2007-to-office-365/

Author: Angelo A Vitale
Last update: 2018-12-18 06:56


Step-By-Step: Setting up AD FS and Enabling Single Sign-On to Office 365

This is a typical highly available setup for Office 365. Ideally these servers will be installed as virtual machines on multiple Hyper-V hosts. Think about redundancy, not only in the virtual servers, but in the Hyper-V servers as well. Install one AD FS server and one AD FS Proxy on one Hyper-V host and the other AD FS server and AD FS Proxy on another Hyper-V host. This prevents loss of service from a hardware failure. Keep in mind that once you are using Single Sign-on with Office 365, you rely on your local Active Directory for authentication. Both video and printed steps have been provided to ease your implementation of AD FS and SSO.

 


Prerequisite

  1. Download Windows Server 2012
  2. Download Hyper-V Server 2012
  3. Should you not have access to a lab, follow this Step-By-Step to set up your own lab

 

Prepare the Base Servers

AD FS Server

  1. Base build the AD FS server with Windows Server 2012
  2. Setup a connection to the internal network
  3. Add the server to the local domain
  4. Update the server with all Windows Updates

AD FS Proxy Server

  1. Base Build the AD FS Proxy server with Windows Server 2012
  2. Setup a connection to the DMZ network (verify connectivity to the AD FS server on port 443)
  3. DO NOT add the server to the local domain
  4. Update the server with all Windows Updates

Directory Sync Server

  1. Base build the Directory Synchronization server with Windows Server 2012
  2. Setup a connection to the internal network
  3. Add the server to the local domain
  4. Update the server with all Windows Updates

Prepare Active Directory

Add UPN Suffix

If you are using an internal domain name that doesn’t match the domain that you want to federate with Office 365, you will have to add a custom UPN suffix that matches that external namespace. If you need to add the UPN suffix, please follow these instructions: http://support.microsoft.com/kb/243629

Example

Internal Domain Name – contoso.local

Desired Federated Domain – contoso.com
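
A minimal sketch of adding the suffix and switching users over with the ActiveDirectory module (the OU path is hypothetical; the KB article above covers the GUI method):

    Import-Module ActiveDirectory
    Get-ADForest | Set-ADForest -UPNSuffixes @{Add="contoso.com"}    # add contoso.com as an alternative UPN suffix

    # Re-point the users you plan to federate and sync at the new suffix
    Get-ADUser -Filter 'UserPrincipalName -like "*@contoso.local"' -SearchBase "OU=Office 365,DC=contoso,DC=local" |
        ForEach-Object { Set-ADUser $_ -UserPrincipalName ($_.SamAccountName + "@contoso.com") }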

 

Clean up Active Directory

This makes sense for so many reasons, but most of all for Directory Sync. I generally make an OU for all the Office 365 services, then create more OUs within that one for all the user accounts, service accounts, groups, servers and computers. This will allow us to filter on user accounts and groups when we enable Directory Synchronization with Office 365. The fewer objects you sync with Office 365, the better. If you have thousands of objects replicating that don’t need to be, things will get messy really quickly. Keep it clean and neat. This will prevent mistakes and keep you headache-free.

 

Setting up AD FS requires the use of a third party SSL certificate. In a production situation, I would recommend a single-name SSL certificate. Wildcard and multi-name certificates will work, but I like to keep things simple and use a standard SSL certificate in production. Make sure that the common name matches what you plan to call the AD FS server farm. Microsoft best practices recommend that you use the host name STS (secure token service). In the example below, I have used the value sts.domain.com.

 

Create the SSL Certificate Request (CSR)

  1. Open Server Manager
     
  2. Click Tools
     
  3. Click Internet Information Services (IIS) Manager

     

  4. Select the local server
  5. Select Server Certificates
  6. Click Open Feature (actions pane)

     

  7. Click Create Certificate Request

     

  8. Fill out the certificate request properties. Make sure that the common name matches what you plan to call the AD FS server farm. Microsoft best practices recommend that you use the host name STS (secure token service). In the example below, I have used the value sts.domain.com.

     

  9. Click Next

     

  10. Leave the Cryptographic service provider at the default
  11. Change the Bit Length to 2048
  12. Click Next

     

  13. Select a location for the request file
  14. Click Finish

 

Fulfill the Certificate Signing Request (CSR)

We need to take the CSR generated in the last step to a third party SSL certificate provider. I choose to use GoDaddy. Here are GoDaddy’s instructions to fulfill the CSR at their site – Requesting a Standard or Wildcard SSL Certificate. Once the certificate is issued, download the completed CSR to the AD FS server.

 

Complete the Certificate Request (CSR)

 

  1. Open Server Manager
     
  2. Click Tools
     
  3. Click Internet Information Services (IIS) Manager

     

  4. Select the local server
     
  5. Select Server Certificates
     
  6. Click Open Feature (actions pane)

     

  7. Click Complete Certificate Request

     

  8. Select the path to the completed CSR file that you downloaded from the third party certificate provider
  9. Enter the friendly name for the certificate
  10. Select Personal as the certificate store
  11. Click OK

     

  12. The certificate will be added 

***Note*** The certificate shown below is a multi-name SSL certificate for my lab environment. When your certificate is added, it should show sts.domain.com, which matches the request.

 

Assign the Completed SSL Certificate

Now that we have the third party certificate completed on the server, we need to assign and bind it to the default website (HTTPS port 443).

  1. Expand the local server
     
  2. Expand Sites
     
  3. Select Default Web Site
     
  4. Click Bindings (actions pane)

     

  5. Click Add

     

  6. Change the type to HTTPS
     
  7. Select your certificate from the drop down menu.

    ***Note*** The certificate shown below is a multi-name SSL certificate for my lab environment. When you select your certificate, it should show sts.domain.com, which matches the completed certificate.

  8. Click OK

     

  9. Click Close

     

  10. Close IIS Manager

Now that we have the required software installed and the certificate in place, we can finally configure the AD FS role and federate with Microsoft.
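
If the AD FS role is not yet present on the server, it can be added from PowerShell before continuing; a one-line sketch for Windows Server 2012:

    Install-WindowsFeature ADFS-Federation -IncludeManagementTools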

 

Configure Local AD FS Federation Server

 

  1. Open Server Manager

     

  2. Click Tools

     

  3. Click AD FS Management

     

  4. Click AD FS Federation Server Configuration Wizard

     

  5. Create a new Federation Service

     

  6. New Federation Server Farm – Choose this option all the time, even if you only plan on deploying one server. If you choose Stand-alone federation server, then you won’t be able to add more servers.

     

  7. Click Next

     

  8. SSL Certificate – This should be pre-populated. If it isn’t, go back and assign/bind the third party certificate to the default web site

     

  9. Federation Service Name – This should match the SSL certificate name

     

    *** NOTE *** Since I am using a multi-name certificate in a lab environment, my SSL certificate name and Federation Service name don’t match. This is not recommended for production environments. Always follow best practices and use a single-name certificate.

     

  10. Click Next

     

  11. Enter the AD FS service account name and password

     

  12. Click Next

     

  13. Click Next

     

  14. All green check marks mean everything is set up correctly

     

  15. Click Close

 

Configure Federation Trust with Office 365

 

Now that we have our side of the federation setup, we can complete the federation with Office 365

  • Open the Desktop on the AD FS server

     

  • Locate the Windows Azure Active Directory Module for Windows PowerShell shortcut

     

  • Right-click it and select Run As Administrator

     

  • Set the credential variable
    • $cred=Get-Credential

     

  • Enter a Global Administrator account from Office 365. I have a dedicated tenant (@domain.onmicrosoft.com) service account set up for AD FS and Directory Synchronization.

     

  • Connect to Microsoft Online Services with the credential variable set previously
    • Connect-MsolService -Credential $cred

 

  • Set the MSOL ADFS Context server, to the ADFS server
    • Set-MsolADFSContext -Computer adfs_servername.domain_name.com

 

  • Convert the domain to a federated domain
    • Convert-MsolDomainToFederated -DomainName domain_name.com

 

  • Successful Federation
    • Successfully updated 'domain_name.com' domain.

 

  • Verify federation
    • Get-MsolFederationProperty -DomainName domain_name.com

This completes the setup for federation to Office 365. Keep in mind that before you can successfully use single sign-on with Office 365, you will need to set up and configure Directory Synchronization. After Directory Synchronization is set up, you will have to license the synchronized users in Office 365. This will provision the services for the users. If they want to access Office 365 from outside the internal network, the AD FS Proxy server needs to be set up and configured.

Author:
Last update: 2019-03-15 09:50


DFS Namespaces service and its configuration data on a computer that is running Windows Server 2003 or Windows Server 2008

INTRODUCTION


This article discusses the following topics to help you create a namespace:

  • Storage locations for configuration data.
  • Examples of how data becomes inconsistent.
  • Methods that you can use to remove orphaned configuration data.
  • Symptoms and error messages that you may receive.

More Information

DFS Namespaces configuration storage locations

The following locations store different configuration data for the Distributed File System (DFS) Namespaces:

  • Active Directory Domain Services (AD DS) stores domain-based namespace configuration data in one or more objects that contain namespace server names, folder targets, and various other configuration data.
  • The namespace servers maintain shares for each namespace hosted.
  • The registry keys on the domain-based namespace servers store namespace memberships. 

    Note On the stand-alone namespace servers, registry keys store all the namespace configuration data.

If any subset of the configuration data is missing or invalid, you may be unable to manage the namespace. Additionally, you may receive many different error messages when you manage DFS Namespaces by using the DFS Namespaces Microsoft Management Console (MMC) snap-in, the Dfsutil.exe tool, or the Dfscmd.exe tool or when a client accesses the namespace. See the "Symptoms and error messages" section for a list of possible error messages. 

Examples of how DFS Namespaces configuration data may become inconsistent

  • The dfsutil /clean command is performed on a domain-based namespace server. This command removes the namespace registry data. The configuration data that is stored in the AD DS remains and is enumerated by the DFS Namespaces MMC snap-in.
  • An authoritative restoration of AD DS is performed to recover a DFS namespace that was deleted by using a DFS management tool such as the DFS Namespaces MMC snap-in or the Dfsutil.exe tool. Although the restoration of AD DS may be successful, the namespace is not operational unless other DFS Namespaces configuration data is also restored or recovered.
  • Restoration of the system state for a namespace server by using a backup that was created before the server became a namespace server.
  • Active Directory replication failures prevent namespace servers from locating the DFS Namespaces configuration data.
  • Incorrect modification or incorrect removal of the share for the namespace on a namespace server.
  • Manual manipulation of the registry or of the AD DS namespace configuration data.

DFS Namespaces configuration cleanup and removal

DFS Namespaces configuration data is managed and maintained by management tools that use DFS APIs. The DFS APIs notify the Active Directory domain controllers and the DFS Namespaces servers about configuration changes. This behavior prevents the configuration data from becoming orphaned and guarantees consistency in the configuration data. If the notification process is inhibited, or if the data is otherwise deleted or lost, follow the cleanup steps that are listed here to remove the configuration data. These changes are not recoverable unless you make a backup of the system state for the domain controller or for the namespace server.
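
A minimal sketch of taking that system state backup before any cleanup, assuming the Windows Server Backup command-line tools are installed and E: is a suitable target volume:

    wbadmin start systemstatebackup -backupTarget:E: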

For more information about how to back up the system state of a server that is running Windows Server 2003, visit the following Microsoft Web site:

http://technet.microsoft.com/en-us/library/cc759141.aspx
For more information about how to back up the system state of a server that is running Windows Server 2008, visit the following Microsoft Web site:

http://technet.microsoft.com/en-us/library/cc770266.aspx
Note The following steps should only be used if recovery of the configuration data is not possible or is not desired.

For more information about the recovery process for a DFS namespace, click the following article number to view the article in the Microsoft Knowledge Base:

969382 Recovery process of a DFS Namespace in Windows 2003 and 2008 Server

  1. For a domain-based DFS namespace, verify the removal of the AD DS namespace configuration data. Before the removal process, you must accurately identify the object that is associated with the malfunctioning or inconsistent namespace. To remove the AD DS namespace configuration data, follow these steps:
    1. Open the Adsiedit.msc tool. This tool is included in Windows Server 2008 and requires that the AD DS role or tools are installed. This tool is available in Windows Server 2003 Support Tools. 

      For more information about the Adsiedit.msc tool, visit the following Microsoft Web site: 

      http://technet.microsoft.com/en-us/library/cc773354(WS.10).aspx
    2. Locate the domain partition of the domain hosting the domain-based namespace. Move to the following location: CN=Dfs-Configuration,CN=System,DC=<domain>
      Note The <domain> placeholder is the distinguished name of the domain.

      DFS Namespaces store the configuration objects in this location. "Windows 2000 Server mode" namespaces have an "fTDfs" class object that is named identically to the namespace. "Windows Server 2008 mode" namespaces have an "msDFS-NamespaceAnchor" class object that is named identically to the associated namespace and that may contain additional child objects for any configured folders.
    3. Select the appropriate object such as the "fTDfs" or "msDFS-NamespaceAnchor" object, and then delete it together with any child objects.

      Note Active Directory replication latencies may delay this change operation from propagating to the remote domain controllers.
  2. On any namespace servers that are hosting the namespace, verify the removal of the DFS namespace registry configuration data. If other functioning namespaces are hosted on the server, make sure that the registry key of only the inconsistent namespace is removed. To remove the DFS namespace registry configuration data, follow these steps:
    1. In Registry Editor, locate the configuration registry key of the namespace at the appropriate path by using one of the following paths:

      Domain-based DFSN in "Windows Server 2008 mode": HKEY_LOCAL_MACHINE\Software\Microsoft\Dfs\Roots\domainV2
      Stand-alone DFSN: HKEY_LOCAL_MACHINE\Software\Microsoft\Dfs\Roots\Standalone
      Domain-based DFSN in "Windows 2000 Server mode": HKEY_LOCAL_MACHINE\Software\Microsoft\Dfs\Roots\Domain
      For more information about the Windows 2000 Server registry storage locations, click the following article number to view the article in the Microsoft Knowledge Base:

      224384 HOW TO: Force Deletion of DFS Configuration Information

    2. If a registry key that is named identically to the inconsistent namespace is found, use the Dfsutil.exe tool to remove the registry key. For example, run the following command: dfsutil /clean /server:servername /share:sharename /verbose

      Note The servername placeholder is the name of the server hosting the namespace and the sharename placeholder is the name of the root share.
      Or, delete the key manually.
    3. On the namespace server, restart the DFS service in Windows Server 2003 or the DFS Namespaces service in Windows Server 2008 to register the change on the service.
  3. Remove the file share that was associated with the namespace from the namespace servers. Failure to follow this step may cause the recreation of the namespace to fail because DFS Namespaces may block the namespace creation.

    Windows Server 2003
    1. Open the Computer Management MMC snap-in. To do this, run the Compmgmt.msc tool.
    2. Expand System Tools, expand Shared Folders, and then click Shares.
    3. Right-click the DFS namespace share, and then click Stop Sharing. If you receive the following error message, you must restart the server and then try again to remove the share by using Computer Management MMC snap-in:"The system cannot stop sharing because the shared folder is a Distributed File System (DFS) namespace root"
    Windows Server 2008
    1. Open the "Share and Storage Management" MMC snap-in. To do this, run the StorageMgmt.msc tool.
    2. Right-click the share of the namespace, and then click Stop Sharing. If you receive the following error message, you must restart the server and then remove the share by using Computer Management MMC snap-in:The system cannot stop sharing because the shared folder is a Distributed File System (DFS) namespace root
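
As a rough consolidation of step 2 above on a namespace server (the server and share names are placeholders; the dfsutil command is the one shown in step 2.2):

    reg query "HKLM\Software\Microsoft\Dfs\Roots\domainV2"      # confirm which key belongs to the broken namespace
    reg query "HKLM\Software\Microsoft\Dfs\Roots\Standalone"
    dfsutil /clean /server:servername /share:sharename /verbose
    Restart-Service Dfs    # "DFS Namespace" service on Windows Server 2008; "Distributed File System" on 2003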

Changing the DFS namespace configuration data should only be considered after you evaluate all other recovery options. We recommend that you regularly obtain backups of the system state for the DFS namespace servers and for the domain controllers of domain-based DFS namespaces. These backups may be used to restore the namespace configuration to full operation without the risk of having inconsistent DFS namespace configuration data.

Symptoms and error messages

DFS Management MMC (Dfsmgmt.msc)

In the Dfsmgmt.msc tool, you may receive the following error messages:

  • \\domain.com\namespace: The Namespace cannot be queried. Element not found.
  • The server you specified already hosts a namespace with this name. Please select another namespace name or another server to host the namespace.
  • A shared folder name "namespace" already exists on the server . If the existing shared folder is used, the security setting specified within the Edit Settings dialog box will not apply. To have a shared folder created with those settings, you must first remove the existing shared folder.
  • The namespace is not unique in the domain in which the namespace server was created. You must go back to choose a new namespace name, or change the namespace type to stand-alone.
  • \\domain.com\namespace1: The namespace server \\servername\namespace1 cannot be added. Cannot create a file when that file already exists.
  • \\domain.com\namespace: The namespace cannot be queried. The system cannot find the file specified.
  • \\domain.com\namespace: The namespace cannot be queried. The device is not ready for use.
  • An error occurred while trying to delete share . The share must be removed from the Distributed File System before it can be deleted.

Distributed File System MMC (Dfsgui.msc)

In the Dfsgui.msc tool, you may receive the following error messages:

  • The specified DFS root does not exist.
  • The DFS root "namespace1" already exists. Please give a different name for the new DFS root.
  • The following error occurred while creating DFS root on server servername: Cannot create a file when that file already exists.
  • The specified DFS root does not exist.
  • The system cannot find the file specified.

Dfsutil.exe

In the Dfsutil.exe tool, you may receive the following error message:

  • System error 1168 has occurred. Element not found.

Dfscmd.exe

In the Dfscmd.exe tool, you may receive the following error messages:

  • System error 1168 has occurred. Element not found.
  • System error 80 has occurred. The file exists.
  • System error 2 has occurred. The system cannot find the file specified.

DFS clients

On a computer that is running the DFS client, you may receive the following error messages:

  • Windows cannot find '\\domain.com\namespace\folder'. Make sure you typed the name correctly, and then try again.
  • File not found.
  • Windows cannot access '\\domain.com\namespace\folder'. Check the spelling of the name. Otherwise, there might be a problem with your network.
    Additional details: 
    Error code: 0x80070002 The system cannot find the file specified.
  • Windows cannot access \\domain.com\namespace1. Error code 0x80070035 The network path was not found.
  • \\domain.com\namespace\folder is not accessible. You might not have permission to use this network resource. . The network path was not found.
  • Configuration information could not be read from the domain controller, either because the machine is unavailable, or access has been denied.
  • Windows cannot access \\domain.com\namespace. Check the spelling of the name. Otherwise, there might be a problem with your network.
    Additional details: 
    Error code: 0x80070035 The network path was not found.
  • The system cannot find the path specified.

https://support.microsoft.com/en-us/help/977511/about-the-dfs-namespaces-service-and-its-configuration-data-on-a-compu


Author: Angelo A Vitale
Last update: 2018-12-11 12:08


Automatic creation of user folders for home, roaming profile and redirected folders.

Hi, Rob here again. Periodically we’re asked “what is the best way to auto-create home, roaming profile, and folder redirection folders instead of Administrators creating and configuring the NTFS permissions manually?” The techniques in this post require you to use the environment variable %USERNAME% in the user’s home folder attribute when you create the user’s account.

We will also make use of the “$” symbol in the share name, which hides the share from anyone who attempts to list the shares on the file server via computer browsing.

Alright let’s get started.

Home directory:

Home folders are created automatically when the user’s account is created and an administrator has enabled the use of home folders. You can change the home folder for the user afterwards, but we are all about making the Admin’s life easier.

Create the folder and enable sharing

As you can see, we created the share and added a dollar sign ($) to the end of the share name.

Next, we’ll configure the share permissions. It is important to note that there is a difference in the default permissions for a share between Windows NT/Windows 2000 and Windows Server 2003. By default, Windows 2000 gives the Everyone group Full Control permissions. Windows Server 2003 gives the Everyone group Read permissions. However, we’ll change this to:

Administrators: Full Control 
System: Full Control 
Authenticated Users: Full Control

If you expect or want users to be able to select their home directory to be available while they are not connected to the network (also known as Offline Files), then you’ll want to make sure you turn on Offline file caching of the HOME$ share. You do this by:

1. Click Offline Settings on Windows 2000 or Caching on Windows Server 2003 or later, which is located on the Sharing tab. 
2. Click Only the files and programs that users specify will be available offline. If you would like more information on the different options and what they mean, you can click here. 
3. Then click OK.
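
On Windows Server 2012 or later, the share, its permissions, and the caching option above can be created in one step; a rough sketch assuming D:\HOME as the folder:

    New-Item -Path D:\HOME -ItemType Directory -Force | Out-Null
    New-SmbShare -Name 'HOME$' -Path 'D:\HOME' -CachingMode Manual `
        -FullAccess 'BUILTIN\Administrators', 'NT AUTHORITY\SYSTEM', 'NT AUTHORITY\Authenticated Users'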

NOTE: You should consider configuring Offline Files settings even if you do not want users to work with files while they are not connected to the network. In that case, you’ll want to disable Offline Files by clicking Files or programs from the share will not be available offline.

Configuring NTFS Permissions

Now we need to configure the NTFS permissions, so we need to be on the “Security” tab of the folder we created earlier.

1. Turn off inheritance on the folder and copy the permissions. You do this by:

a. Click the Advanced button found on the Security tab. 
b. Clear Allow inheritable permissions to propagate to this object check box in the Advanced Security Settings dialog box. 
c. Click Copy when prompted by the Security dialog box.

2. Click OK to return to the Security tab. Ensure we have the following permissions set:

Administrators: Full Control 
System: Full Control 
Creator Owner: Full Control 
Authenticated Users: Read & Execute, List Folder Contents, Read

3. Change permissions for Authenticated Users so they cannot access other users’ folders. You do this by:

a. Click Advanced on the Security tab. 
b. Click Authenticated Users, and then click Edit. 
c. On the Permissions Entry for HOME dialog box, drop down the Apply onto and select This folder only. 
d. Click OK twice.
Here is a screen shot of this step:
We now have the permissions configured properly. Next, let’s create a user and specify the home folder location. This is done by going to the Profile tab of the user account in Active Directory Users and Computers. The following screen shot shows an example of a drive mapping.
Yep, the TOM folder got created without a problem:
When we look at the permissions of the TOM folder we see the following:
We see that only Administrators, System, Tom, and Creator Owner have permissions to the folder. Other users do not.
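
The same NTFS permissions can be scripted; a minimal icacls sketch, again assuming D:\HOME as the shared folder:

    icacls D:\HOME /inheritance:r
    icacls D:\HOME /grant "Administrators:(OI)(CI)F" "SYSTEM:(OI)(CI)F" "CREATOR OWNER:(OI)(CI)(IO)F"
    icacls D:\HOME /grant "Authenticated Users:(RX)"    # no inheritance flags, so it applies to this folder only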

Roaming Profile: 
Configuring roaming profiles uses the same procedure as the home folder share, except for one difference. You should disable Offline Files and you should always hide the profile share using a dollar sign ($).

Since the setup is pretty much exactly the same (except for the share name), I’m not going to bore you with the same steps as earlier.

The main difference between the roaming profile folder and the home folder is that the roaming profile folder is not created until the user logs on and then logs off. Windows creates the profile directory and copies the profile to the share once the user has completed one successful logon and logoff.

You configure the profile location on the Profile or Terminal Services Profile tab within Active Directory Users and Computers. Type a UNC path to where Windows should create the user profile. The following screen shot gives you an example of a user account configured with a profile path.

Folder Redirection: 
For the most part the share and NTFS permissions are the same as the Home folder configuration except we need to replace Authenticated Users with the Everyone group. This is required for Windows to automatically create the redirected folders. These two KB articles provide more information:

291087 Event ID 101 and Event ID 1000 Messages May Be Displayed When Folder 
http://support.microsoft.com/?id=291087 
274443 How to dynamically create security-enhanced redirected folders by using 
http://support.microsoft.com/?id=274443

Create the folder and enable sharing

So, we need to create a folder on a file server and enable it for sharing, again I would recommend that you hide the share using the dollar sign ($) at the end of the share name.

If you expect or want users to be able to have their redirected folders available while they are not connected to the network (also known as Offline Files), then you’ll want to make sure you turn on Offline file caching of this share. You do this by:

1. Click Offline Settings on Windows 2000 or Caching on Windows Server 2003 or later, which is located on the Sharing tab. 
2. Click Only the files and programs that users specify will be available offline. If you would like more information on the different options and what they mean, you can click here. 
3. Then click OK.

We will also need to set the following permissions for the share:

Administrators: Full Control 
System: Full Control 
Everyone: Full Control

Configuring NTFS Permissions

We need to configure NTFS permissions for the newly created folder. You’ll want to remove inheritance from this folder, as we did when configuring home folders.

1. Turn off inheritance on the folder and copy the permissions. You do this by:

a. Click the Advanced button found on the Security tab. 
b. Clear Allow inheritable permissions to propagate to this object check box in the Advanced Security Settings dialog box. 
c. Click Copy when prompted by the Security dialog box.

2. Click OK to return to the Security tab. Ensure we have the following permissions set:

Administrators: Full Control 
System: Full Control 
Creator Owner: Full Control 
Everyone: Read & Execute, List Folder Contents, Read

3. Now we need to change the permissions a bit for “Everyone” so that they do not have any permissions to other users’ folders. This is done by doing the following:

a. Click Advanced on the Security tab. 
b. Click Everyone, and then click Edit. 
c. On the Permissions Entry for FldrRedir dialog box, drop down Apply onto and select This folder only. 
d. Click OK twice.
Here is a screen shot of this step:

4. Configuring Folder Redirection settings within Group Policy:

a. Use the Group Policy Management Console (GPMC) and edit the GPO containing the Folder Redirection settings you want modified. Configure each from the following list to use the Basic – Redirect everyone’s folder to the same location Folder Redirection setting. Type the UNC path listed in the table into the Root Path setting for each folder listed in the following table.
Redirected Folder      UNC Path
Application Data       \\contoso-rt-mem1\FldrRedir$
Desktop                \\contoso-rt-mem1\FldrRedir$
My Documents           \\contoso-rt-mem1\FldrRedir$
Start Menu             \\contoso-rt-mem1\FldrRedir$



Here is a screen shot of Application Data being redirected:


You can see that Windows shows you the entire path used for the Folder Redirection. So although we didn’t specify the user’s name in the Root Path, the redirection example shows the folder path as: \\contoso-rt-mem1\FldrRedir$\Clair\Application Data

b. By default, Administrators do not have permissions to users’ redirected folders. If you require the ability to go into the users’ folders, you will want to go to the “Settings” tab and uncheck “Grant the user exclusive rights to” on each folder that is redirected. This allows Administrators to enter the users’ redirected folder locations without taking ownership of the folder and files.

When you’re all done, you can kick back and enjoy the easy life of being an administrator. Now when you create the user and define the home path, the user’s home folder is created immediately. When Group Policy applies Folder Redirection, the redirected folders are created automatically. And the roaming profile folder will be created after the user’s first logon and logoff.

This last part is for the former Novell Admins out there. Yes, you could use Access Based Enumeration (ABE) on these new shares; however, if there are going to be a lot of user folders on any one of these shares, you could experience degradation of performance. Enabling ABE on a share does come at a price in performance. If you are still all hyped up to enable this feature, please read the available ABE whitepaper information so that you can make an informed decision.

https://blogs.technet.microsoft.com/askds/2008/06/30/automatic-creation-of-user-folders-for-home-roaming-profile-and-redirected-folders/

Author: Angelo A Vitale
Last update: 2018-12-11 12:11


Deploy Folder Redirection with Offline Files

Applies To: Windows 10, Windows 7, Windows 8, Windows 8.1, Windows Server 2008 R2, Windows Server 2012, Windows Server 2012 R2, Windows Vista

This topic describes how to use Windows Server to deploy Folder Redirection with Offline Files to Windows client computers.

For a list of recent changes to this topic, see Change history.

Important
Due to the security changes made in MS16-072, we updated Step 3: Create a GPO for Folder Redirection of this topic so that Windows can properly apply the Folder Redirection policy (and not revert redirected folders on affected PCs).

 

Prerequisites

Hardware requirements

Folder Redirection requires an x64-based or x86-based computer; it is not supported by Windows® RT.

Software requirements

Folder Redirection has the following software requirements:

  • To administer Folder Redirection, you must be signed in as a member of the Domain Administrators security group, the Enterprise Administrators security group, or the Group Policy Creator Owners security group.
  • Client computers must run Windows 10, Windows 8.1, Windows 8, Windows 7, Windows Server 2012 R2, Windows Server 2012, Windows Server 2008 R2, or Windows Server 2008.
  • Client computers must be joined to the Active Directory Domain Services (AD DS) that you are managing.
  • A computer must be available with Group Policy Management and Active Directory Administration Center installed.
  • A file server must be available to host redirected folders.

    • If the file share uses DFS Namespaces, the DFS folders (links) must have a single target to prevent users from making conflicting edits on different servers.
    • If the file share uses DFS Replication to replicate the contents with another server, users must be able to access only the source server to prevent users from making conflicting edits on different servers.
    • When using a clustered file share, disable continuous availability on the file share to avoid performance issues with Folder Redirection and Offline Files. Additionally, Offline Files might not transition to offline mode for 3-6 minutes after a user loses access to a continuously available file share, which could frustrate users who aren’t yet using the Always Offline mode of Offline Files.
Note
To use new features in Folder Redirection, there are additional client computer and Active Directory schema requirements. For more information, see Folder Redirection, Offline Files, and Roaming User Profiles.

Step 1: Create a folder redirection security group

If your environment is not already set up with Folder Redirection, the first step is to create a security group that contains all users to which you want to apply Folder Redirection policy settings.

To create a security group for Folder Redirection

  1. Open Server Manager on a computer with Active Directory Administration Center installed.
  2. On the Tools menu, click Active Directory Administration Center. Active Directory Administration Center appears.
  3. Right-click the appropriate domain or OU, click New, and then click Group.
  4. In the Create Group window, in the Group section, specify the following settings:

    • In Group name, type the name of the security group, for example: Folder Redirection Users.
    • In Group scope, click Security, and then click Global.
  5. In the Members section, click Add. The Select Users, Contacts, Computers, Service Accounts or Groups dialog box appears.
  6. Type the names of the users or groups to which you want to deploy Folder Redirection, click OK, and then click OK again.
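
A minimal sketch of Step 1 with the ActiveDirectory module instead of Active Directory Administration Center (the OU path and member names are hypothetical):

    New-ADGroup -Name "Folder Redirection Users" -GroupCategory Security -GroupScope Global `
        -Path "OU=Groups,DC=corp,DC=contoso,DC=com"
    Add-ADGroupMember -Identity "Folder Redirection Users" -Members user1, user2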

Step 2: Create a file share for redirected folders

If you do not already have a file share for redirected folders, use the following procedure to create a file share on a server running Windows Server 2012.

Note
Some functionality might differ or be unavailable if you create the file share on a server running another version of Windows Server.

To create a file share on Windows Server 2012

  1. In the Server Manager navigation pane, click File and Storage Services, and then click Shares to display the Shares page.
  2. In the Shares tile, click Tasks, and then click New Share. The New Share Wizard appears.
  3. On the Select Profile page, click SMB Share – Quick. If you have File Server Resource Manager installed and are using folder management properties, instead click SMB Share - Advanced.
  4. On the Share Location page, select the server and volume on which you want to create the share.
  5. On the Share Name page, type a name for the share (for example, Users$) in the Share name box.

    Tip
    When creating the share, hide the share by putting a $ after the share name. This will hide the share from casual browsers.

  6. On the Other Settings page, clear the Enable continuous availability checkbox, if present, and optionally select the Enable access-based enumeration and Encrypt data access checkboxes.
  7. On the Permissions page, click Customize permissions…. The Advanced Security Settings dialog box appears.
  8. Click Disable inheritance, and then click Convert inherited permissions into explicit permission on this object.
  9. Set the permissions as described in Table 1 and shown in Figure 1, removing permissions for unlisted groups and accounts, and adding special permissions to the Folder Redirection Users group that you created in Step 1.

    The Advanced Security Settings window.

    Figure 1 Setting the permissions for the redirected folders share
  10. If you chose the SMB Share - Advanced profile, on the Management Properties page, select the User Files Folder Usage value.
  11. If you chose the SMB Share - Advanced profile, on the Quota page, optionally select a quota to apply to users of the share.
  12. On the Confirmation page, click Create.

Table 1 Required permissions for the file share hosting redirected folders

User Account | Access | Applies to
System | Full control | This folder, subfolders and files
Administrators | Full Control | This folder only
Creator/Owner | Full Control | Subfolders and files only
Security group of users needing to put data on share (Folder Redirection Users) | List folder / read data (1); Create folders / append data (1); Read attributes (1); Read extended attributes (1); Read permissions (1) | This folder only
Other groups and accounts | None (remove) |

(1) Advanced permissions
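
A rough icacls equivalent of Table 1, assuming the share folder is D:\Users and the group is CORP\Folder Redirection Users (hypothetical names):

    icacls D:\Users /inheritance:r
    icacls D:\Users /grant "SYSTEM:(OI)(CI)F"                                  # this folder, subfolders and files
    icacls D:\Users /grant "Administrators:F"                                  # this folder only
    icacls D:\Users /grant "CREATOR OWNER:(OI)(CI)(IO)F"                       # subfolders and files only
    icacls D:\Users /grant "CORP\Folder Redirection Users:(RD,AD,RA,REA,RC)"   # advanced permissions, this folder only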

Step 3: Create a GPO for Folder Redirection


If you do not already have a GPO created for Folder Redirection settings, use the following procedure to create one.

To create a GPO for Folder Redirection

  1. Open Server Manager on a computer with Group Policy Management installed.
  2. From the Tools menu click Group Policy Management. Group Policy Management appears.
  3. Right-click the domain or OU in which you want to setup Folder Redirection and then click Create a GPO in this domain, and Link it here.
  4. In the New GPO dialog box, type a name for the GPO (for example, Folder Redirection Settings), and then click OK.
  5. Right-click the newly created GPO and then clear the Link Enabled checkbox. This prevents the GPO from being applied until you finish configuring it.
  6. Select the GPO. In the Security Filtering section of the Scope tab, select Authenticated Users, and then click Remove to prevent the GPO from being applied to everyone.
  7. In the Security Filtering section, click Add.
  8. In the Select User, Computer, or Group dialog box, type the name of the security group you created in Step 1 (for example, Folder Redirection Users), and then click OK.
  9. Click the Delegation tab, click Add, type Authenticated Users, click OK, and then click OK again to accept the default Read permissions.

    This step is necessary due to security changes made in MS16-072.
Important
Due to the security changes made in MS16-072, you now must give the Authenticated Users group delegated Read permissions to the Folder Redirection GPO - otherwise the GPO won't get applied to users, or if it's already applied, the GPO is removed, redirecting folders back to the local PC. For more info, see Deploying Group Policy Security Update MS16-072.
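
A minimal sketch of Step 3 with the GroupPolicy module (the GPO and group names are the examples used above; the OU path is hypothetical):

    New-GPO -Name "Folder Redirection Settings"
    New-GPLink -Name "Folder Redirection Settings" -Target "OU=FolderRedirection,DC=corp,DC=contoso,DC=com" -LinkEnabled No
    Set-GPPermission -Name "Folder Redirection Settings" -TargetName "Folder Redirection Users" -TargetType Group -PermissionLevel GpoApply
    Set-GPPermission -Name "Folder Redirection Settings" -TargetName "Authenticated Users" -TargetType Group -PermissionLevel GpoRead -Replace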

Step 4: Configure folder redirection with Offline Files

After creating a GPO for Folder Redirection settings, edit the Group Policy settings to enable and configure Folder Redirection, as discussed in the following procedure.

Note
Offline Files is enabled by default for redirected folders on Windows client computers, and disabled on computers running Windows Server, unless changed by the user. To use Group Policy to control whether Offline Files is enabled, use the Allow or disallow use of the Offline Files feature policy setting.

For information about some of the other Offline Files Group Policy settings, see Enable Advanced Offline Files Functionality, and Configuring Group Policy for Offline Files.

To configure Folder Redirection in Group Policy

  1. In Group Policy Management, right-click the GPO you created (for example, Folder Redirection Settings), and then click Edit.
  2. In the Group Policy Management Editor window, navigate to User Configuration, then Policies, then Windows Settings, and then Folder Redirection.
  3. Right-click a folder that you want to redirect (for example, Documents), and then click Properties.
  4. In the Properties dialog box, from the Setting box click Basic - Redirect everyone’s folder to the same location.

    Note
    To apply Folder Redirection to client computers running Windows XP or Windows Server 2003, click the Settings tab and select the Also apply redirection policy to Windows 2000, Windows 2000 Server, Windows XP, and Windows Server 2003 operating systems checkbox.

  5. In the Target folder location section, click Create a folder for each user under the root path and then in the Root Path box, type the path to the file share storing redirected folders, for example: \\fs1.corp.contoso.com\users$
  6. Click the Settings tab, and in the Policy Removal section, optionally click Redirect the folder back to the local user profile location when the policy is removed (this setting can help make Folder Redirection behave more predictably for administrators and users).
  7. Click OK, and then click Yes in the Warning dialog box.

Step 5: Enable the Folder Redirection GPO

Once you have completed configuring the Folder Redirection Group Policy settings, the next step is to enable the GPO, permitting it to be applied to affected users.

Tip
If you plan to implement primary computer support or other policy settings, do so now, before you enable the GPO. This prevents user data from being copied to non-primary computers before primary computer support is enabled.

To enable the Folder Redirection GPO

  1. Open Group Policy Management.
  2. Right-click the GPO that you created, and then click Link Enabled. A checkbox appears next to the menu item.
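
If you scripted the earlier steps, the link can also be enabled from PowerShell. A brief sketch, assuming the example GPO name above and a placeholder domain DN:

    # Enable the existing domain-level link for the Folder Redirection GPO.
    Set-GPLink -Name "Folder Redirection Settings" -Target "DC=corp,DC=contoso,DC=com" -LinkEnabled Yes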

Step 6: Test Folder Redirection

To test Folder Redirection, sign in to a computer with a user account configured for Folder Redirection. Then confirm that the folders and profiles are redirected.

To test Folder Redirection

  1. Sign in to a primary computer (if you enabled primary computer support) with a user account for which you have enabled Folder Redirection.
  2. If the user has previously signed in to the computer, open an elevated command prompt, and then type the following command to ensure that the latest Group Policy settings are applied to the client computer:



    gpupdate /force  
    
    
  3. Open File Explorer.
  4. Right-click a redirected folder (for example, the My Documents folder in the Documents library), and then click Properties.
  5. Click the Location tab, and confirm that the path displays the file share you specified instead of a local path.
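
As an alternative to checking the Location tab, you can read the resolved Documents path from PowerShell. A quick check, assuming the example file share used above:

    # Returns the effective Documents path for the signed-in user.
    # After redirection applies, it should point at the file share rather than C:\Users\...
    [Environment]::GetFolderPath('MyDocuments')
    # Expected form (example): \\fs1.corp.contoso.com\users$\<username>\Documents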

Appendix A: Checklist for deploying Folder Redirection

 

1. Prepare domain
- Join computers to domain
- Create user accounts
2. Create security group for Folder Redirection
- Group name:
- Members:
3. Create a file share for redirected folders
- File share name:
4. Create a GPO for Folder Redirection
- GPO name:
5. Configure Folder Redirection and Offline Files policy settings
- Redirected folders:
- Windows 2000, Windows XP, and Windows Server 2003 support enabled?
- Offline Files enabled? (enabled by default on Windows client computers)
- Always Offline Mode enabled?
- Background file synchronization enabled?
- Optimized Move of redirected folders enabled?
6. (Optional) Enable primary computer support
- Computer-based or User-based?
- Designate primary computers for users
- Location of user and primary computer mappings:
- (Optional) Enable primary computer support for Folder Redirection
- (Optional) Enable primary computer support for Roaming User Profiles
7. Enable the Folder Redirection GPO
8. Test Folder Redirection

Change history

The following table summarizes some of the most important changes to this topic.

Date: January 18, 2017
Description: Added a step to Step 3: Create a GPO for Folder Redirection to delegate Read permissions to Authenticated Users, which is now required because of a Group Policy security update.
Reason: Customer feedback.

Author: Angelo A Vitale
Last update: 2018-12-11 12:17


Disable Windows Defender/Security Essentials

You will need to disable it through Group Policy. Open Local Group Policy Editor by typing gpedit.msc in the Run dialog box.

Run - gpedit

Navigate to Computer Configuration → Administrative Templates → Windows Components → Windows Defender. Then enable Turn off Windows Defender on the right side panel.

Local Group Policy Editor - 2015-05-27 23_22_52

The change takes effect immediately. If you open the Windows Defender screen, you will see red warning signs all over the place telling you that your PC is at risk.

Windows Defender - at risk

To re-enable Windows Defender after disabling it, set the “Turn off Windows Defender” group policy back to Disabled (or Not Configured) and restart the Windows Defender service from Services.

Note that the same steps also work to disable Windows Defender in Windows 10.
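
For reference, the same policy can be pushed by script. The “Turn off Windows Defender” setting corresponds to the DisableAntiSpyware value under the Windows Defender policy key; the following is only a minimal sketch run from an elevated PowerShell prompt, not a supported replacement for Group Policy, and newer Windows 10 builds with Tamper Protection may ignore this value.

    # Sketch: set the "Turn off Windows Defender" policy value directly in the registry.
    $key = 'HKLM:\SOFTWARE\Policies\Microsoft\Windows Defender'
    if (-not (Test-Path $key)) { New-Item -Path $key -Force | Out-Null }
    Set-ItemProperty -Path $key -Name 'DisableAntiSpyware' -Type DWord -Value 1

    # To undo it later, remove the value and restart the Windows Defender service:
    # Remove-ItemProperty -Path $key -Name 'DisableAntiSpyware'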

Author: Angelo A Vitale
Last update: 2018-12-11 12:20


Folder Redirection Overview, Applies To: Windows 8, Windows Server 2008 R2, Windows Server 2012


Applies To: Windows 8, Windows Server 2008 R2, Windows Server 2012

Folder Redirection

User settings and user files are typically stored in the local user profile, under the Users folder. The files in local user profiles can be accessed only from the current computer, which makes it difficult for users who use more than one computer to work with their data and synchronize settings between multiple computers. Two technologies exist to address this problem: Roaming Profiles and Folder Redirection. Both technologies have their advantages, and they can be used separately or together to create a seamless user experience from one computer to another. They also provide additional options for administrators managing user data.

Folder Redirection lets administrators redirect the path of a folder to a new location. The location can be a folder on the local computer or a directory on a network file share. Users can work with documents on a server as if the documents were based on a local drive. The documents in the folder are available to the user from any computer on the network. Folder Redirection is located under Windows Settings in the console tree when you edit domain-based Group Policy by using the Group Policy Management Console (GPMC). The path is [Group Policy Object Name]\User Configuration\Policies\Windows Settings\Folder Redirection .

Recent changes to Folder Redirection

Folder Redirection now includes the following features:

  • The ability to redirect more folders in the user profile folders than in earlier Windows operating systems. This includes the Contacts, Downloads, Favorites, Links, Music, Saved Games, Searches, and Videos folders.
  • The ability to apply settings for redirected folders to Windows® 2000, Windows 2000 Server®, Windows XP, and Windows Server 2003 computers. You have the option to apply the settings that you configure on Windows Server® 2008 R2, Windows® 7, Windows Server 2008, or Windows Vista® only to computers that are running those operating systems, or to apply them to computers that are running earlier Windows operating systems also. For these earlier Windows operating systems, you can apply these settings to folders that can be redirected. These are the Application Data, Desktop, My Documents, My Pictures, and Start Menu folders. This option is available on the Settings tab in the Properties for the folder, under Select the redirection settings for [FolderName].
  • The option to have the Music, Pictures, and Videos folders follow the Documents folder. In Windows operating systems earlier than Windows Vista, these folders were subfolders of the Documents folder. By configuring this option, you resolve any issues related to naming and folder structure differences between earlier and more recent Windows operating systems. This option is available on the Target tab in the Properties for the folder, under Settings.
  • The ability to redirect the Start Menu folder to a specific path for all users. In Windows XP, the Start Menu folder could be redirected only to a shared target folder.
Note
This capability is new only to the Start Menu folder. All other redirectable folders in Windows Vista and later versions can also be redirected to a specific path for all users.

Folders that can be redirected

You can use the GPMC to redirect folders.

Folder in Windows 7 and Windows Vista – Equivalent folder in earlier Windows operating systems

AppData/Roaming – Application Data
Contacts – Not Applicable
Desktop – Desktop
Documents – My Documents
Downloads – Not Applicable
Favorites – Not Applicable
Links – Not Applicable
Music – Not Applicable
Pictures – My Pictures
Saved Games – Not Applicable
Searches – Not Applicable
Start Menu – Start Menu
Videos – Not Applicable

Advantages of Folder Redirection

  • Even if users log on to different computers on the network, their data is always available.
  • Offline Files technology (which is turned on by default) gives users access to the folder even when they are not connected to the network. This is especially useful for people who use portable computers.
  • Data that is stored in a network folder can be backed up as part of routine system administration. This is safer because it requires no action by the user.
  • If you use Roaming User Profiles, you can use Folder Redirection to reduce the total size of your Roaming Profile and make the user logon and logoff process more efficient for the end-user. When you deploy Folder Redirection with Roaming User Profiles, the data synchronized with Folder Redirection is not part of the roaming profile and is synchronized in the background by using Offline Files after the user has logged on. Therefore, the user does not have to wait for this data to be synchronized when they log on or log off as is the case with Roaming User Profiles.
  • Data that is specific to a user can be redirected to a different hard disk on the user's local computer from the hard disk that holds the operating system files. This makes the user's data safer in case the operating system has to be reinstalled.
  • As an administrator, you can use Group Policy to set disk quotas, limiting how much space is taken up by user profile folders.

Selecting a Folder Redirection target

The Target tab of the folder's Properties box enables you to select the location of the redirected folder on a network or in the local user profile. You can choose between the following settings:

  • Basic—Redirect everyone's folder to the same location. This setting enables you to redirect everyone's folder to the same location and is applied to all users included in the Group Policy object (GPO). For this setting, you have the following options in specifying a target folder location:

    • Create a folder for each user under the root path. This option creates a folder in the form \\server\share\User Account Name\Folder Name. Each user has a unique path for their redirected folder.
Note
If you enable the Also apply redirection policy to Windows 2000, Windows 2000 Server, Windows XP, and Windows Server 2003 operating systems option on the Settings tab, this option is not available for the Start Menu folder.
  • Redirect to the following location. This option uses an explicit path for the redirection location. This can cause multiple users to share the same path for the redirected folder.
  • Redirect to the local user profile location. This option moves the location of the folder to the local user profile under the Users folder.
  • Advanced—Specify locations for various user groups. This setting enables you to specify redirection behavior for the folder based on the security group memberships for the GPO.
  • Follow the Documents folder. This option is available only for the Music, Pictures, and Videos folders. This option resolves any issues related to naming and folder structure differences between Windows 7 and Windows Vista, and earlier Windows operating systems. If you choose this option, you cannot configure any additional redirection options or policy removal options for these folders, and settings are inherited from the Documents folder.
Note
This behavior also occurs by default if you enable the Also apply redirection policy to Windows 2000, Windows 2000 Server, Windows XP, and Windows Server 2003 operating systems option on the Settings tab when you configure redirection settings for the Documents folder.

  • Not configured. This is the default setting. It specifies that no policy-based folder redirection is applied by this GPO; folders are redirected to the local user profile location or stay where they are, depending on any existing redirection policies. No changes are made to the current location of this folder.

Configuring additional settings for the redirected folder

In the Settings tab in the Properties box for a folder, you can enable these settings:

  • Grant the user exclusive rights. This setting is enabled by default and is a recommended setting. This setting specifies that the administrator and other users do not have permissions to access this folder.
  • Move the contents of [FolderName] to the new location. This setting moves all the data the user has in the local folder to the shared folder on the network.
  • Also apply redirection policy to Windows 2000, Windows 2000 Server, Windows XP, and Windows Server 2003 operating systems. This enables folder redirection to work with Windows 7 and Windows Vista, and earlier Windows operating systems. This option applies only to redirectable folders in earlier Windows operating systems, which are the Application Data, Desktop, My Documents, My Pictures, and Start Menu folders.
Note
The AppData/Roaming (previously Application Data in earlier Windows operating systems) folder in Windows Vista now contains several folders that were previously under the root folder of the User Profile folder in earlier Windows operating systems. For example, in earlier Windows operating systems, the Start Menu folder was not under the Application Data folder. It might not make sense to redirect all the folders under Application Data when you enable the Also apply redirection policy to Windows 2000, Windows 2000 Server, Windows XP, and Windows Server 2003 operating systems setting. Therefore, if you choose this setting, Windows 7 and Windows Vista do not redirect the following folders automatically: Start Menu, Network Shortcuts, Printer Shortcuts, Templates, Cookies, Sent To. If you do not choose this setting, Windows 7 and Windows Vista automatically redirect all folders under the Application Data folder.

  • Policy Removal. The following summarizes the behavior of redirected folders and their contents when the GPO no longer applies, based on your selections for policy removal. These policy removal options are available on the Settings tab, under Policy Removal.

Redirect the folder back to the user profile location when policy is removed – Enabled:

  • The folder returns to its user profile location.
  • The contents are copied, not moved, back to the user profile location.
  • The contents are not deleted from the redirected location.
  • The user continues to have access to the contents, but only on the local computer.

Redirect the folder back to the user profile location when policy is removed – Disabled:

  • The folder returns to its user profile location.
  • The contents are not copied or moved to the user profile location.
Note
If the contents of a folder are not copied to the user profile location, the user cannot see them.

Leave the folder in the new location when policy is removed – either Enabled or Disabled:

  • The folder remains at its redirected location.
  • The contents remain at the redirected location.
  • The user continues to have access to the contents at the redirected folder.

Additional considerations

Author: Angelo A Vitale
Last update: 2018-12-11 12:33


How to Set Up DFS Replication in Windows Server 2012 R2

How to Set Up DFS Replication in Windows Server 2012 R2

image

DFS Replication is an effective way to replicate data between servers across a room or on the other side of the world. DFS Replication uses remote differential compression (RDC) to replicate only the changes in a file on a block by block basis instead of replicating the entire file. Consequently, replication is very efficient even across limited bandwidth connections.

Before setting up replication between servers, the DFS Replication roles need to be installed on each server that is going to participate in the replication group.

Installing the DFS Replication Role

Open Server Manager by clicking on the Server Manager icon on the taskbar.

image

On the Welcome Tile, under Quick Start, click on Add roles and features to start the Add Roles and Features Wizard. If there's no Welcome Tile, it might be hidden. Click View on the menu bar and click Show Welcome Tile.

image

Read and click Next.

image

Select Role-based or feature-based installation and click Next.

image

Select Select a server from the server pool and select the server on which you want to install DFS Replication. Click Next.

image

Under Roles, expand File and Storage Services, expand File and iSCSI Services, select DFS Replication and click Next.

If you have not already installed the features required for DFS Replication, the following box will pop up explaining which features and roles will be installed along with DFS Replication.

image

Click Add Features.

image

Back in the Select server roles dialog, DFS Replication should now be checked along with the other roles required for it. If everything looks OK, click Next.

image

The Select features dialog shows the features that will be added along with the DFS Replication role. Click Next.

image

Review and confirm what's being installed and click Install.

image

Click Close when the installation completes.

Now that the DFS Replication role is installed, we can set up replication.
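
If you prefer PowerShell over the wizard, the same role can be added with Install-WindowsFeature. A short sketch; FS02 below is just a placeholder name for a remote server:

    # Install the DFS Replication role plus its management tools (DFS Management console and DFSR PowerShell module).
    Install-WindowsFeature FS-DFS-Replication -IncludeManagementTools

    # Or install it on a remote server from your management workstation:
    # Install-WindowsFeature FS-DFS-Replication -IncludeManagementTools -ComputerName FS02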


Configuring Replication Between Two Servers

Go to the start menu.

image

Click on Administrative Tools.

image

Double-click DFS Management to launch the DFS Management console.


image

Right-click on Replication in the left pane of the DFS Management console.

image

Select New Replication Group to launch the New Replication Group Wizard.

image

Click the Replication group for data collection radio button and click Next.

image

Enter a descriptive and unique name in the Name of replication group text box. By default, the Domain box contains the domain name of the server you're working with. Enter a different domain name if necessary.

image

Enter the name of the server containing the data you wish to replicate in the Name text box and click Next.

image

Click Add to define the folders that contain the data you want to replicate.

image

Enter or browse to the path of a folder to replicate in the Local path of folder to replicate text box. You can enter a custom name to represent the folder or leave it set as the default. Click OK.

image

The folder just added should be in the Replicated folders box. Click Add to add more folders to replicate (folders can be added later). When all the folders are added, click Next.

image

Enter the name of the server that will be the target for the replicated data. Servers in replication groups must be in the same Active Directory domain. Click Next.

image

Enter or browse to the path on the destination server where the replicated data is to reside in the Target folder text box. Verify the replication flow in the Source and target locations box and click Next.

image

There are two methods of bandwidth utilization that DFS Replication can use.

The first is continuous replication, where replication takes place 24/7. The amount of bandwidth that replication consumes can be set to full or to one of several limited rates.

The second method is to schedule replication. Scheduled replication can be set to not replicate data during certain times and/or days of the week, or to run at full or limited bandwidth. For example, replication can run at a lower bandwidth during business hours, when network utilization is high, and at full bandwidth at night and on weekends.

Replication bandwidth tuning can be broad with continuous replication or finely tuned by scheduling. It should not be needlessly complex; try to keep it as simple as possible and only as complex as needed.

Once the replication bandwidth is set, click Next.

image

Review the replication group settings and click Create.

image

Read the dialog about replication delay and click OK.

image

Back in the DFS Management console, we see the newly created replication group. Replication will begin once the changes have been pushed to all the servers. Eventually, depending on bandwidth, data will start showing up in the target folder on the destination server. Enjoy!
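
On Windows Server 2012 R2 the same replication group can also be built with the DFSR PowerShell cmdlets instead of the wizard. The following is only a minimal sketch; the group, folder, server, and path names are placeholders rather than values from the walkthrough above.

    # Create a replication group with one replicated folder and two members.
    New-DfsReplicationGroup -GroupName "RG-Data"
    New-DfsReplicatedFolder -GroupName "RG-Data" -FolderName "Data"
    Add-DfsrMember -GroupName "RG-Data" -ComputerName "FS01","FS02"

    # Connect the members (creates sending/receiving connections in both directions).
    Add-DfsrConnection -GroupName "RG-Data" -SourceComputerName "FS01" -DestinationComputerName "FS02"

    # Point each member at its local content path; FS01 seeds the initial replication.
    Set-DfsrMembership -GroupName "RG-Data" -FolderName "Data" -ComputerName "FS01" -ContentPath "D:\Data" -PrimaryMember $true -Force
    Set-DfsrMembership -GroupName "RG-Data" -FolderName "Data" -ComputerName "FS02" -ContentPath "D:\Data" -Force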


http://www.techunboxed.com/2014/07/how-to-set-up-dfs-replication-in.html

Author: Angelo A Vitale
Last update: 2018-12-11 12:42


How to: Setting up Folder Redirection & Roaming User Profiles in a Windows 2012 R2 Domain, Step-by-Step


My corporate client uses Windows 2012 R2 servers in an Active Directory domain that supports about 50 users and 35 workstations and laptops running Windows 10 Pro.

Problem was, we had more staff than computers, and a limited computer budget. However, many of the primary staff were often off-site working at remote field projects, so their vacated office computers could - in theory - be used by other staff working locally on-site. So Management asked - Why can’t staff just use any available office computers? Well, unfortunately, all these computers were inconveniently inaccessible at deadline times and led to unsynchronized user documents, creating a lot of re-work and frustration – and coffee drinking. Instead, we needed a more strategic and tactical process to ensure integrity and consistency of user data, Outlook emails, and calendar events across multiple shared computers.

This is a perfect opportunity to implement “Roaming User Profiles”!

Microsoft defines Roaming User Profiles as the process which:
“Redirects user profiles to a [network] file share so that users receive the same operating system and application settings on multiple [designated] computers. When a user signs in to a computer by using an account that is set up with a file share as the profile path, the user’s profile is downloaded to the local computer and merged with the local profile (if present). When the user signs out of the computer, the local copy of their profile, including any changes, is merged with the server copy of the profile.” (Microsoft Corp., 2016)

We solved the problem by implementing Folder Redirection, along with Roaming User Profiles. Here’s my step-by-step procedure for doing it.


34 Steps total

Step 1: Creating our Test Network Environment


To help illustrate and clarify some of the concepts involved in implementing Folder Redirection with Roaming Users, we set up a simple domain = HOST.ORG as a testing environment, with its Active Directory domain controller TESTBOX.HOST.ORG. Note: Be sure that Windows Server 2012 R2 has the latest updates.

In this domain, we will select two computers to be shared by two roaming users, as shown in Figure 1. These computers should have the same Windows 10 Version (e.g., 1607 = Windows Anniversary Update, or 1703 = Creators Update).


Step 2: Identify the Roaming Users


If the roaming users haven’t already been added to the domain, then we need to do that first via Active Directory Users and Computers. They would be set up as regular users, with usernames and passwords. In our test domain, we will designate two users – Andy Alpha and Bill Beta – to become our Roaming Users. See Figure 2. So when the original computer user goes off into the field to work on a mission, then his/her computer becomes available for a Roaming User to log on and do productive work. When the original users return, they log on as they usually do – there is no impact from sharing their computers with Roaming Users.


Step 3: Create a Folder Redirection Security Group


To start this procedure, we first need to create a Security Group to control access permissions for roaming users and their profiles. We will call this group “FRDsecurity”.

1. Open SERVER MANAGER, and under TOOLS, click on ACTIVE DIRECTORY ADMINISTRATION CENTER. On the left menu, under OVERVIEW, select the domain for this new Security Group = HOST.ORG.

2. Right-Click on HOST.ORG, select NEW, then select GROUP. The “Create Group” panel opens up.

3. Enter the Group name = FRDsecurity, and select Type=Security and Scope=Global. See Figure 3.


Step 4: Add our users to the Security Group


Scroll down the panel of our Security Group, and on the left menu, select MEMBERS and add the two roaming users and the two Primary Computers – See Figure 4: 
a. User = aalpha = Andy Alpha 
b. User = bbeta = Bill Beta 
c. Computer = AAAA 
d. Computer = BBBB

NOTE: When you try to add the computers as Members, the default Object Types will include ‘Users’ but not ‘Computers’. So click the OBJECT TYPES button and check the Computers box, then OK, and proceed to add the AAAA computer. Repeat this process again to add the BBBB computer.

Then Click OK to close the new Security Group panel.
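
The same group and memberships can be created from PowerShell with the ActiveDirectory module. A minimal sketch using the names from this walkthrough (note the trailing $ on the computer account names):

    # Create the global security group and add the two users and two primary computers.
    Import-Module ActiveDirectory
    New-ADGroup -Name "FRDsecurity" -GroupScope Global -GroupCategory Security
    Add-ADGroupMember -Identity "FRDsecurity" -Members aalpha, bbeta, 'AAAA$', 'BBBB$'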


Step 5: Verify Users are Members of the new Security Group


To verify, open ACTIVE DIRECTORY USERS AND COMPUTERS, click on the PROPERTIES of both users Andy Alpha and Bill Beta, and check their MEMBERS OF tab now listing the FRDsecurity group. See Figure 5.


Step 6: Associate the roaming users with the computers they will be sharing


The computers they will be sharing are called “Primary Computers”. So when roaming user Andy logs on to a designated Primary Computer, all his files and folders will be available; after Andy signs off, then Bill can log on to the same computer, and all of Bill files will be available.

By the way, if this particular Primary Computer also happens to be the main working computer of another user, nothing will change for that original user. When the original user logs on, he/she will have their same old Desktop, Documents, Favorites, etc., and they can just go about their business as usual.

• Open the ACTIVE DIRECTORY ADMINISTRATIVE CENTER, and select COMPUTERS, and right-click on each computer that is to be a Primary Computer, and select PROPERTIES – See Figure 6.


Step 7: Find the Distinguished Name of the Primary Computer


In the left-hand menu, click on EXTENSIONS, open the ATTRIBUTE EDITOR, and scroll down to the Attribute = distinguishedName. See Figure 7.


Step 8: Copy the Distinguished Name of the Primary Computer


1. Click to highlight the distinguishedName attribute, then click the VIEW button, with result shown in Figure 8.

2. Copy the VALUE and save in a text file for use later. Click CANCEL to close the String Attribute Editor, then CANCEL again to close the computer PROPERTIES panel. The Value for this computer looks like:

CN=AAAA,CN=Computers,DC=host,DC=org

3. In our example, we will be using two Primary Computers, so the saved text file should look like:

CN=AAAA,CN=Computers,DC=host,DC=org 
CN=BBBB,CN=Computers,DC=host,DC=org

4. Now, in the same ACTIVE DIRECTORY ADMINISTRATIVE CENTER, select USERS, and for each user Andy Alpha and Bill Beta, select PROPERTIES.


Step 9: Find the Roaming User’s Attribute for Primary Computer


In the PROPERTIES panel, select EXTENSIONS, go to the ATTRIBUTE EDITOR tab, and scroll down the list to the attribute =msDS-PrimaryComputer, which currently has the value = 


Step 10: Add Primary Computers to Roaming User List


1. Click the EDIT button, then paste and ADD all the computers that this user is designated to use as a shared Primary Computer. See Figure 10. 

2. Click OK to close the Editor, and OK again to close the user PROPERTIES panel. 

3. Repeat this process for all designated Roaming Users. 
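
Steps 7 through 10 can also be done from PowerShell, which avoids copying distinguished names by hand. A sketch using the example computer and user names from this walkthrough:

    # Grab the distinguished names of the primary computers.
    $pcs = (Get-ADComputer -Identity AAAA).DistinguishedName,
           (Get-ADComputer -Identity BBBB).DistinguishedName

    # Add them to each roaming user's msDS-PrimaryComputer attribute.
    Set-ADUser -Identity aalpha -Add @{'msDS-PrimaryComputer' = $pcs}
    Set-ADUser -Identity bbeta  -Add @{'msDS-PrimaryComputer' = $pcs}

    # Verify the result.
    Get-ADUser aalpha -Properties 'msDS-PrimaryComputer' | Select-Object -ExpandProperty 'msDS-PrimaryComputer'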


Step 11: Create a Network Folder for the Roaming Users Profiles & Data


1. On the Server - in our case, TESTBOX.HOST.ORG - open File Explorer and create a new partition big enough to hold the profiles for all the roaming users. We labelled the partition as “ROAMERS” and used 200 GB. In our Server, it was identified as the Volume J:

a. NOTE: Depending on the business activities of the users, the size of this partition could be estimated at 25 GB per user, including Outlook files. For users involved in media or large files, maybe 75 GB per user would suffice.

2. Next, open SERVER MANAGER and in the left menu, select FILE AND STORAGE SERVICES. The server = TESTBOX should be highlighted, so then click on SHARES in the left menu.

3. In the upper right corner of the SHARES box, click on TASKS pull-down menu, and select NEW SHARE – see Figure 11. 

4. Select SMB SHARE – QUICK, then click NEXT

5. Under SHARE LOCATION, scroll down and select the J: volume, then click NEXT

6. Under SHARE NAME, type the name of the folder which will contain all the users’ profiles. We used the name “FRDfileshares$” – the “$” suffix hides the share from casual network browsing, for privacy and security. 


Step 12: Define Path to Network File Share


1. In the same panel, under LOCAL PATH TO SHARE, you should see the new path “J:\Shares\FRDfileshares$”. Also, under REMOTE PATH TO SHARE, you should see \\TESTBOX\FRDfileshares$. So far, so good. Click NEXT – see Figure 12. 

2. Under OTHER SETTINGS, click to enable the two boxes for: 
a. Enable access-based enumeration, and 
b. Allow caching of share

3. That’s all. Click NEXT 


Step 13: Customize Access Permissions for the File Share


1. Under PERMISSIONS, click the button CUSTOMIZE PERMISSIONS 

2. Now we should see the panel ADVANCED SECURITY SETTINGS FOR FRDfileshares$, displaying the name “J:\Shares\FRDfileshares$”

3. At the bottom, click the button DISABLE INHERITANCE, and in the popup, select CONVERT INHERITED PERMISSIONS INTO EXPLICIT PERMISSIONS ON THIS OBJECT

4. Then, at bottom left, click the ADD button to modify permissions of our Security Group

5. In the popup window, click on the top item “SELECT A PRINCIPAL” 


Step 14: Assign these permissions to our Security Group


1. In the popup box, under ENTER THE OBJECT NAME TO SELECT, type the name of our security group “FRDsecurity”, then click OK 

2. Now, in the new panel PERMISSION ENTRY FOR FRDfileshares$, note the PRINCIPAL displays our security group “FRDsecurity”. Make sure TYPE = Allow, and APPLIES TO = This Folder Only.

3. Next, click on SHOW ADVANCED PERMISSIONS on the right of the panel.

4. In the ADVANCED PERMISSIONS box, check only these three options: 
a. Read Attributes 
b. Read Extended Attributes 
c. Read Permissions

5. Click OK 


Step 15: Edit permissions for our Security Group


1. Now our FRDsecurity group should show up in the list of Permission Entries. Click APPLY, then OK, then NEXT – see Figure 15. 

2. On the CONFIRMATION page, click the CREATE button on the bottom right

3. The RESULTS page should indicate the new share was successfully created. Click on the CLOSE button. 
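
Steps 11 through 15 can also be scripted. The sketch below is only one way to do it: it creates the hidden share with access-based enumeration and caching, grants share access to the FRDsecurity group and the local Administrators group (a choice made for this sketch, not taken from the wizard), and then uses icacls to disable inheritance and grant the same three advanced read permissions chosen in Step 14, on this folder only.

    # Create the folder and the hidden SMB share with access-based enumeration and caching enabled.
    New-Item -Path 'J:\Shares\FRDfileshares$' -ItemType Directory -Force | Out-Null
    New-SmbShare -Name 'FRDfileshares$' -Path 'J:\Shares\FRDfileshares$' `
        -FolderEnumerationMode AccessBased -CachingMode Manual `
        -FullAccess 'HOST\FRDsecurity','Administrators'

    # Disable inheritance (copying the existing ACEs) and grant FRDsecurity
    # Read Attributes, Read Extended Attributes, and Read Permissions on this folder only.
    icacls 'J:\Shares\FRDfileshares$' /inheritance:d
    icacls 'J:\Shares\FRDfileshares$' /grant 'HOST\FRDsecurity:(RA,REA,RC)'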


Step 16: Verify our New File Share is Official


To verify, in the SERVER MANAGER panel, we should see our new share “FRDfileshares$”, as shown in Figure 16

Note that the name we gave our partition – ROAMERS – is not significant at all in this process.


Step 17: Create Group Policy Object (GPO) for Folder Redirection


So now we need to create a Group Policy Object to force specific folders to be redirected to the appropriate user file shares.

1. Open SERVER MANAGER and click on GROUP POLICY MANAGEMENT. Open the Forest, and under Domains, right-click on our domain (in this case, HOST.ORG), and in the drop-down menu, select CREATE A GPO IN THIS DOMAIN AND LINK IT HERE.

2. The NEW GPO panel pops up, so enter the name of the new GPO. In our case, our new GPO will be named “GPOfolderredirection”. The SOURCE STARTER GPO remains as “(none)”. Click OK – see Figure 17.


Step 18: Clear GPO Link-Enabled status


Right-click on the new GPOfolderredirection, and *un-check* the LINK-ENABLED item. Reason is, we’re not ready yet to actually use this new GPO. We will re-enable the link later. Make sure “Link Enabled” is not checked – see Figure 18.


Step 19: Remove Authenticated Users


1. Click on our new GPOfolderredirection – ignore the little popup “Group Policy Management Console” warning. The GPO settings will show up on the right side with a bunch of tabs: SCOPE, DETAILS, SETTINGS, and DELEGATION. Click on SCOPE if it is not already selected.

2. In the SECURITY FILTERING section, the user group listed is AUTHENTICATED USERS. Select this name, and REMOVE it.


Step 20: Replace Authenticated Users with our Security Group


Now click on ADD, and in the popup box, enter our security group name “FRDsecurity” as the group we want this GPO to apply to. Hit OK. See Figure 20


Step 21: Delegate Authenticated Users as READ-ONLY


Now go to the DELEGATION tab. At the bottom, click on the ADD tab, and in the popup, type AUTHENTICATED USERS. Click OK, and accept the default READ permission for Authenticated Users. Click OK and close the panel – see Figure 21
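
Steps 19 through 21 also have a PowerShell equivalent. A minimal sketch using the GPO and group names from this walkthrough:

    # Apply the GPO only to FRDsecurity, and leave Authenticated Users with Read only
    # (Read is still required for the GPO to be processed after MS16-072).
    Set-GPPermission -Name "GPOfolderredirection" -TargetName "FRDsecurity" -TargetType Group -PermissionLevel GpoApply
    Set-GPPermission -Name "GPOfolderredirection" -TargetName "Authenticated Users" -TargetType Group -PermissionLevel GpoRead -Replace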


Step 22: Select User Folders to be Redirected


Now we identify which folder(s) of the roaming users will be shared – these are the standard folders like: APPDATA, DESKTOP, DOCUMENTS, DOWNLOADS, MUSIC, etc. We can select whichever folders are required by the client. For example, in some companies, the folders MUSIC, PICTURES and VIDEOS are not allowed because they are not relevant to the business and considered frivolous.

Also be aware that the more folders selected, the longer it will take to load the user’s profile when he/she logs on to and signs off from a Primary Computer.

1. Under GROUP POLICY MANAGEMENT, select our GPOfolderredirection object and right-click on EDIT to open the GROUP POLICY EDITOR. You should see two sections: USER CONFIGURATION and COMPUTER CONFIGURATION.

2. Click on USER CONFIGURATION, go to POLICIES, then go to WINDOWS SETTINGS, then FOLDER REDIRECTION. You will see a list of all possible user folders for a standard installation. Initially, none of these folders is set up for Roaming. One-by-one, we need to pick each folder we want, and set it up to be redirected to the FRDfileshares we previously set up – see Figure 22


Step 23: Set Target Path for Redirected Folder


So, let’s start with APPDATA, which contains the most recent settings of each user:

Right-click on APPDATA to open its PROPERTIES panel.

1. In the TARGET tab, select “Basic: Redirect everyone’s folder to the same location”

2. Further down, type in the UNC path to FRDfileshares folder: this looks like \\testbox.host.org\Shares\FRDfileshares$

3. NOTE: you must use the UNC syntax! See Figure 23.


Step 24: Edit Settings for Redirected Folder


1. Select the SETTINGS tab and verify the redirection settings are checked for these two items: 
• Grant the user exclusive rights to AppData (Roaming) 
• Move the contents of AppData (Roaming) to the new location 

2. Then, at the bottom POLICY REMOVAL box, the only button to be checked is the one that says: 
• Redirect the folder back to the local user profile location when policy is removed

3. Hit APPLY and OK

4. Repeat Steps 23 and 24 for each folder the user needs. In our case, we repeated this tedious procedure for the following folders: 
a. AppData 
b. Desktop 
c. Start Menu 
d. Documents 
e. Downloads 
f. Favorites 
g. Music 
h. Pictures 
i. Videos


Step 25: Warning Popup


For each folder, you may get a warning about incompatibility with older operating systems. As long as you have Windows 10 systems, just ignore this warning and hit the YES button to continue. See Figure 25


Step 26: Redirect Folders on Primary Computers Only


Okay, now it’s time to modify the policy rules to achieve what we want.

1. Go to GROUP POLICY MANAGEMENT, click on our GPOfolderredirection policy, right-click and select EDIT. Make sure you see both the COMPUTER CONFIGURATION and USER CONFIGURATION items.

2. Go to COMPUTER CONFIGURATION, then to POLICIES, then to ADMINISTRATIVE TEMPLATES, then to SYSTEM, and then to FOLDER REDIRECTION. 

a. On the right panel, select the policy REDIRECT FOLDERS ON PRIMARY COMPUTERS ONLY, click on EDIT, and click on the button ENABLED. Hit APPLY and then OK. Make sure the state = ENABLED before leaving this panel. See Figure 26 


Step 27: Download Roaming Profiles on Primary Computers Only


Stay in COMPUTER CONFIGURATION, then to POLICIES, then to ADMINISTRATIVE TEMPLATES, then to SYSTEM, then scroll down to USER PROFILES, then click EDIT.

• On the right panel, select the policy DOWNLOAD ROAMING PROFILES ON PRIMARY COMPUTERS ONLY, and ENABLE it. Make sure the state = ENABLED before leaving this panel. See Figure 27.


Step 28: Redirect Roaming User Folders on Primary Computers Only


Go to USER CONFIGURATION, then to POLICIES, then to ADMINISTRATIVE TEMPLATES, then to SYSTEM, and then to FOLDER REDIRECTION. 

• On the right panel, select the policy REDIRECT FOLDERS ON PRIMARY COMPUTERS ONLY, click on EDIT, and ENABLE it. Make sure the state = ENABLED before leaving this panel. See Figure 28.


Step 29: Enable our GPO


Finally, go back to GROUP POLICY MANAGEMENT, right-click on our “GPOfolderredirection” policy, and click on LINK ENABLED to actually enable this policy


Step 30: Ensure each Roaming User’s Profile points to the Redirected File Shares


To be sure the roaming user is using the profile stored in the network file shares and not on any local computer, we need to do one more thing for each roaming user.

1. Open ACTIVE DIRECTORY USERS AND COMPUTERS, and select the roaming user. Right-click and click PROPERTIES.

2. Click on the MEMBERS OF tab to verify this user is a member of the FRDsecurity group

3. Click on the PROFILE tab, and in the USER PROFILE section, in the PROFILE PATH box, type in the path to the file shares. See Figure 30. In our case, this is the UNC path: 

\\TESTBOX.HOST.ORG\FRDfileshares$\aalpha

4. Click APPLY and OK.
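
Setting the profile path can also be scripted for each roaming user. A short sketch using the UNC path shown above; the bbeta path simply follows the same pattern and is an assumption:

    # Point each roaming user's profile at the hidden file share.
    Set-ADUser -Identity aalpha -ProfilePath '\\TESTBOX.HOST.ORG\FRDfileshares$\aalpha'
    Set-ADUser -Identity bbeta  -ProfilePath '\\TESTBOX.HOST.ORG\FRDfileshares$\bbeta'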


Step 31: Repeat this process for all the Roaming Users


Be sure that all roaming users have their profile path defined.

At this point, you can (as an option) Reboot the Server…. I just do it as a habit after making important Windows edits.


Step 32: Test the process


1. Sit down at any one of the designated Primary Computers, let’s try computer=AAAA, and log in as a Roaming User – in our case, it would be either user “aalpha” or “bbeta”.

a. If the Roaming User has never logged into this PC before, then he/she will be greeted with the usual new user setup: “Hi – We’re setting things up for you – This won’t take long – etc.” Be patient!

2. If the computer has not yet been rebooted, it doesn’t know about the policies we just created, so we may have to force a policy update on that computer. 

a. To do this, open a Command prompt as Administrator, and type: 

gpupdate /force 

b. This will force a Group Policy Update on this particular computer; other Primary Computers may also need this boost. The computer will reboot, and you will need to log in again as the roaming user.

3. Verify that the roaming user is actually “roaming”. Open the Primary Computer’s Control Panel, go to SYSTEM AND SECURITY, then to SYSTEM, and in the left menu, click on ADVANCED SYSTEM SETTINGS. (You will need to log in as the Administrator). In the ADVANCED tab, under USER PROFILES, click the SETTINGS button. In the USER PROFILES box, you should see the roaming user (in this case “aalpha” = Andy Alpha) listed with Type=Roaming and Status=Roaming.

4. Create a few temporary test files in one or more directories, then SIGN OUT of that computer. 
a. Note: It is very important for each Roaming User to SIGN OUT, otherwise changes will not be saved on the network file share

5. Now go to another computer designated as a Primary Computer, say computer=BBBB, and log in as the same Roaming User. 
a. If we followed all these instructions, then all the user’s Desktop, Documents, etc. should appear, including all the temporary test files just created.
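
Besides gpupdate, a quick way to confirm that the policy actually reached the user is gpresult, run from a command prompt in the roaming user's session:

    # Refresh policy, then list what actually applied for the signed-in user.
    gpupdate /force
    gpresult /r
    # In the USER SETTINGS section, "Applied Group Policy Objects" should list
    # GPOfolderredirection, and the security group list should include FRDsecurity.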


Step 33: Verify Status of Roaming User




Again, verify that the roaming user is actually “roaming” for this computer BBBB. Open the Primary Computer’s Control Panel, go to SYSTEM AND SECURITY, then to SYSTEM, and in the left menu, click on ADVANCED SYSTEM SETTINGS. (You will need to log in as the Administrator). In the ADVANCED tab, under USER PROFILES, click the SETTINGS button. In the USER PROFILES box, you should see the roaming user (in this case “aalpha”) listed with Type=Roaming and Status=Roaming. See Figure 33.

Once the basic profile has been set up, it can “roam” to any other Primary Computer without going through the Windows profile default setup again.


Step 34: Verify Network File Sharing




At this point, we can observe that the Redirected Folders have been updated and contain the profiles of our two roaming users. On our test server – TESTBOX.HOST.ORG – use File Explorer to open the J:\ROAMERS partition that we created for the network file shares, and now see the profiles for each roaming user – the path is: J:\Shares\FRDfileshares$ - see Figure 34.

IMPORTANT: If you view your network file shares listing and see profiles for user.v5 and user.v6, the v5 profile is used by the older Windows 10 Anniversary Update, and the v6 profile is used by the newer Windows 10 Creators Update. They are not compatible, and Roaming Users won't work across them. So, make sure your Primary Computers all have the same version of Windows 10.


I’ve used this sequence in many different environments, and it always works perfectly. So I hope it also works for you.

The sequence can tolerate minor adjustments, such as defining Roaming Users and Primary Computers early in the process, but the steps I’ve presented have been successfully implemented many times.

Another approach would be to use Virtual Machines, but that requires larger image files because the entire operating system and application program suite would be moved around. By using Microsoft’s Folder Redirection/Roaming Users approach, only the roaming user profile data (e.g., Desktop, Documents, Pictures, etc.) would be moved.

Author: Angelo A Vitale
Last update: 2018-12-11 12:44


Pwd-Last-Set attribute

Pwd-Last-Set attribute
The date and time that the password for this account was last changed. This value is stored as a large integer that represents the number of 100 nanosecond intervals since January 1, 1601 (UTC). If this value is set to 0 and the User-Account-Control attribute does not contain the UF_DONT_EXPIRE_PASSWD flag, then the user must set the password at the next logon.

CN: Pwd-Last-Set
Ldap-Display-Name: pwdLastSet
Size: 8 bytes
Update Privilege: This value is set by the system.
Update Frequency: Each time the password is changed.
Attribute-Id: 1.2.840.113556.1.4.96
System-Id-Guid: bf967a0a-0de6-11d0-a285-00aa003049e2
Syntax: Interval
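
Because pwdLastSet is stored as 100-nanosecond intervals since January 1, 1601 (UTC), it converts directly with the .NET FILETIME helpers. A small PowerShell sketch; the user name jdoe is just an example:

    # Read the raw attribute and convert it to a readable UTC timestamp.
    $u = Get-ADUser -Identity jdoe -Properties pwdLastSet
    if ($u.pwdLastSet -eq 0) {
        'pwdLastSet is 0: the password must be set at next logon (unless UF_DONT_EXPIRE_PASSWD is set).'
    } else {
        [datetime]::FromFileTimeUtc($u.pwdLastSet)
    }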

https://msdn.microsoft.com/en-us/library/ms679430(v=vs.85).aspx

Author: Angelo A Vitale
Last update: 2018-12-11 12:51


Replicate folder targets using DFS Replication

Applies to: Windows Server (Semi-Annual Channel), Windows Server 2016, Windows Server 2012 R2, Windows Server 2012, Windows Server 2008 R2, and Windows Server 2008

You can use DFS Replication to keep the contents of folder targets in sync so that users see the same files regardless of which folder target the client computer is referred to.

To replicate folder targets using DFS Replication

  1. Click Start, point to Administrative Tools, and then click DFS Management.
  2. In the console tree, under the Namespaces node, right-click a folder that has two or more folder targets, and then click Replicate Folder.
  3. Follow the instructions in the Replicate Folder Wizard.

Note

Configuration changes are not applied immediately to all members except when using the Suspend-DfsReplicationGroup and Sync-DfsReplicationGroup cmdlets. The new configuration must be replicated to all domain controllers, and each member in the replication group must poll its closest domain controller to obtain the changes. The amount of time this takes depends on the Active Directory Domain Services (AD DS) replication latency and the long polling interval (60 minutes) on each member. To poll immediately for configuration changes, open a Command Prompt window and then type the following command once for each member of the replication group: 
dfsrdiag.exe PollAD /Member:DOMAIN\Server1 
To do so from a Windows PowerShell session, use the Update-DfsrConfigurationFromAD cmdlet, which was introduced in Windows Server 2012 R2.
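
For example, from a PowerShell session on Windows Server 2012 R2 or later (the server names below are placeholders):

    # Ask each member to poll AD DS immediately for the new replication configuration.
    Update-DfsrConfigurationFromAD -ComputerName SRV01, SRV02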



Author: Angelo A Vitale
Last update: 2018-12-11 12:52


Step by Step : Deploy DFS in Windows Server 2012 R2

Today, let’s go through a step-by-step on how to deploy Distributed File System (DFS) in Windows Server 2012 R2, but before we start, you should know what DFS is all about.

What Is DFS?

Normally, to access a file share, domain users might use a Universal Naming Convention (UNC) path to reach the shared folder content.

Many large companies have hundreds of file servers that are dispersed geographically throughout the organization.

This is very challenging for users who are trying to find and access files efficiently.

So by using a namespace, DFS can simplify the UNC folder structure. In addition, DFS can replicate the virtual namespace and the shared folders to multiple servers within the organization. This can ensure that the shares are located as close as possible to users, thereby providing an additional benefit of fault tolerance for the network shares.

Alright, that’s just a bit of a DFS introduction. For more information, please refer to http://technet.microsoft.com/en-us/library/jj127250.aspx, or for those interested in getting hands-on with DFS, please join my Server 2012 training – see my website for more information: http://compextrg.com/

So, enough said, let’s get started with our DFS deployment.

** As usual, for this DFS demo, I’m using three Windows Server 2012 machines (DC01, SVR01, COMSYS-RODC01) and a Windows client (Surface01).


** I will install DFS into SVR01 and COMSYS-RODC01 Server

1 – Always be aware that to deploy DFS replication you need at least two servers so that the folders can replicate between them. I will install DFS on the SVR01 and COMSYS-RODC01 servers; you can install DFS on both simultaneously.

To install DFS in Svr01 server, open Server Manager, on the Dashboard click Add Roles and Features


2 – In the Before you begin box, click Next


3 – On the Select installation type box, click Next to proceed (make sure Role-based or feature-based installation is selected)…


4 – On the Select destination server box, click Next to proceed…


5 – On the Select server roles page, expand File and Storage Services, expand File and iSCSI Services, and then select the DFS Namespaces check box, in the Add Roles and Features pop-up box, click Add Features…


6 – Next, make sure you select the DFS Replication check box, and only then click Next to proceed…


7 – Next, on the Select features box, click Next


8 – On the Confirm installation selections box, click Install


9 – Wait for few minutes for the installation to complete and when the installation completes, click close…


** As I mentioned previously, you also need to install DFS on another server, which in my demo is the COMSYS-RODC01 server…

** Once you confirm that both servers have DFS installed, proceed with the DFS namespace configuration.
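
If you’d rather skip the wizard, the same roles can be installed on both servers from one PowerShell prompt. A short sketch using the server names from this demo:

    # Install DFS Namespaces and DFS Replication (plus management tools) on both servers.
    Install-WindowsFeature FS-DFS-Namespace, FS-DFS-Replication -IncludeManagementTools -ComputerName SVR01
    Install-WindowsFeature FS-DFS-Namespace, FS-DFS-Replication -IncludeManagementTools -ComputerName COMSYS-RODC01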

10 – 1st, open DFS Management from Server Manager…


11 – Next, on the DFS console, right-click Namespaces, and then click New Namespace (A namespace is a virtual view of shared folders in your server)…


12 – In the New Namespace Wizard, on the Namespace Server page, under Server, type svr01, and then click Next…


13 – Next, on the Namespace Name and Settings box, under Name, type MarketingDocs, and then click Edit Settings…


14 – In the Edit Settings box, under Local Path of shared folder: type C:\DFSRoots\MarketingDocs and select Administrator have full access; other users have read and write permissions, then click OK…


15 – Next, on the Namespace Type box, verify that Domain-based namespace is selected. Take note that the namespace will be accessed by \\comsys.local\MarketingDocs, ensure also that the Enable Windows Server 2008 mode check box is selected, and then click Next…


16 – On the Review Settings and Create Namespace page, click Create


17 – On the Confirmation box, verify that the Create namespace task is successful, and then click Close…


18 – Next, you need to enable access-based enumeration for the MarketingDocs namespace.

To do so, under Namespaces, right-click \\comsys.local\MarketingDocs, and then click Properties…


19 – In the \\comsys.local\MarketingDocs Properties box, click the Advanced tab, then select the Enable access-based enumeration for this namespace check box, and then click OK…


20 – Next, let’s add the Brochures folder to the MarketingDocs namespace…

To do that, right-click \\comsys.local\MarketingDocs , and then click New Folder


21 – In the New Folder box, under Name, type Brochures then click Add…


22 – In the Add Folder Target dialog box, type \\comsys-rodc01\Brochures, and then click OK…


23 – In the Warning box, click Yes


24 – In the Create Share box, in the Local path of shared folder box, type C:\MarketingDocs\Brochures, and select Administrator have full access; other users have read and write permissions, then click OK…


25 – In the Warning box, click Yes to proceed…


26 – Click OK again to close the New Folder dialog box…


27 – Next, I want to add the OnlineAdvert folder to the MarketingDocs namespace, so to do that, right-click \\comsys.local\MarketingDocs, and click New Folder, then In the New Folder box, under Name, type OnlineAdvert, and then, click Add…


28 – In the Add Folder Target box, type \\svr01\OnlineAdvert, and then click OK…


29 – In the Warning box, click Yes to create the OnlineAdvert folder…


30 – Next, in the Create Share box, in the Local path of shared folder box, type C:\MarketingDocs\OnlineAdvert, make sure also you select Administrator have full access; other users have read and write permissions, then click OK…


31 – In the Warning box, click Yes




32 – Click OK again to close the New Folder dialog box (verify that \\svr01\OnlineAdvert is listed), and also that the Brochures and OnlineAdvert folders are listed under the \\comsys.local\MarketingDocs namespace…




33 – Now let’s verify that our MarketingDocs namespace and its folders can be accessed over UNC. Open RUN and type \\comsys.local\MarketingDocs, then in the MarketingDocs window verify that both Brochures and OnlineAdvert are displayed.
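
For reference, the namespace and folders built in steps 10–33 can also be created with the DFSN PowerShell cmdlets (Windows Server 2012 and later). This is only a minimal sketch, and it assumes the underlying SMB shares (\\svr01\MarketingDocs, \\comsys-rodc01\Brochures, \\svr01\OnlineAdvert) already exist, since unlike the wizard the cmdlets do not create them for you:

    # Create the domain-based (Windows Server 2008 mode) namespace with access-based enumeration.
    New-DfsnRoot -Path '\\comsys.local\MarketingDocs' -TargetPath '\\svr01\MarketingDocs' `
        -Type DomainV2 -EnableAccessBasedEnumeration $true

    # Add the two folders and their targets.
    New-DfsnFolder -Path '\\comsys.local\MarketingDocs\Brochures' -TargetPath '\\comsys-rodc01\Brochures'
    New-DfsnFolder -Path '\\comsys.local\MarketingDocs\OnlineAdvert' -TargetPath '\\svr01\OnlineAdvert'

    # Add the second target for Brochures (the one that gets replicated in steps 34 onward).
    New-DfsnFolderTarget -Path '\\comsys.local\MarketingDocs\Brochures' -TargetPath '\\svr01\Brochures'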




34 – Now comes the second important task, which is to configure DFS Replication (DFS-R). But before that, let’s create another folder target for Brochures…

Right-click Brochures, and then click Add Folder Target…




35 – In the New Folder Target box, under Path to folder target, type \\svr01\Brochures, and then click OK…




36 – In the Warning box, click Yes to create the shared folder on svr01 server…


37 – Next, in the Create Share box, under Local path of shared folder, type C:\MarketingDocs\Brochures, don’t forget to select Administrator have full access; other users have read and write permissions, then click OK…


38 – In the Warning box, click Yes to create the folder on svr01 server…


39 – In the Replication box, click Yes. The Replicate Folder Wizard starts…


40 – Next, in the Replicate Folder Wizard, on both the Replication Group and Replicated Folder Name page, accept the default settings, and then click Next…




41 – On the Replication Eligibility page, click Next




42 – On the Primary Member box, I choose SVR01 server to be my Primary DFS server, and then click Next…




43 – On the Topology Selection box, select Full Mesh, and then click Next…


44 – On the Replication Group Schedule and Bandwidth, I choose Full and then click next…




45 – On the Review Settings and Create Replication Group box, click Create




46 – On the Confirmation box, click Close (verify that all status is Success)…




47 – In the Replication Delay box, click OK…




48 – Next, expand Replication, and then click comsys.local\marketingdocs\brochures. In the right pane, under the Memberships tab, verify that both the comsys-rodc01 and svr01 servers are listed…




49 – To make sure the replication process is running without any issues, and also to verify that our second server, COMSYS-RODC01, sees the same DFS configuration, log on to the COMSYS-RODC01 server, open DFS Management, right-click Namespaces, and click Add Namespace to Display…




50 – In the Add Namespace to Display box, verify that domain is Comsys.local and under Namespace:, \\Comsys.local\MarketingDocs is listed and then click OK…




51 – Next, in the DFS console on the Comsys-RODC01 server, you should see that both the Brochures and OnlineAdvert folders are listed…




52 – Lastly, log on to your client PC as any domain user, open RUN, type \\Comsys.local\MarketingDocs, and press Enter. The MarketingDocs folder should pop up with the Brochures and OnlineAdvert folders inside…


We’re done for now. With this configuration you can start using DFS, but there are still a few things to verify, especially around high availability.
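
One quick way to check the replication part is to look at the backlog between the two members from PowerShell once the configuration has propagated. A small sketch using this demo’s group and folder names, which are the wizard’s defaults here – adjust them if you renamed anything:

    # Show any files still waiting to replicate from SVR01 to COMSYS-RODC01.
    Get-DfsrBacklog -GroupName 'comsys.local\marketingdocs\brochures' -FolderName 'Brochures' `
        -SourceComputerName 'SVR01' -DestinationComputerName 'COMSYS-RODC01'

    # The classic command-line equivalent:
    # dfsrdiag backlog /rgname:"comsys.local\marketingdocs\brochures" /rfname:Brochures /smem:SVR01 /rmem:COMSYS-RODC01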

Author: Angelo A Vitale
Last update: 2018-12-11 12:59


Windows Cluster for Windows 2000 reference only

Step-by-Step Guide to Installing Cluster Service

This step-by-step guide provides instructions for installing Cluster service on servers running the Windows® 2000 Advanced Server and Windows 2000 Datacenter Server operating systems. The guide describes the process of installing the Cluster service on cluster nodes. It is not intended to explain how to install cluster applications. Rather, it guides you through the process of installing a typical, two-node cluster itself.

On This Page

Introduction 
Checklists for Cluster Server Installation 
Cluster Installation 
Install Cluster Service software 
Verify Installation 
For Additional Information 
Appendix A 

Introduction

A server cluster is a group of independent servers running Cluster service and working collectively as a single system. Server clusters provide high-availability, scalability, and manageability for resources and applications by grouping multiple servers running Windows® 2000 Advanced Server or Windows 2000 Datacenter Server.

The purpose of server clusters is to preserve client access to applications and resources during failures and planned outages. If one of the servers in the cluster is unavailable due to failure or maintenance, resources and applications move to another available cluster node.

For clustered systems, the term high availability is used rather than fault-tolerant, as fault tolerant technology offers a higher level of resilience and recovery. Fault-tolerant servers typically use a high degree of hardware redundancy plus specialized software to provide near-instantaneous recovery from any single hardware or software fault. These solutions cost significantly more than a clustering solution because organizations must pay for redundant hardware that waits idly for a fault. Fault-tolerant servers are used for applications that support high-value, high-rate transactions such as check clearinghouses, Automated Teller Machines (ATMs), or stock exchanges.

While Cluster service does not guarantee non-stop operation, it provides availability sufficient for most mission-critical applications. Cluster service can monitor applications and resources, automatically recognizing and recovering from many failure conditions. This provides greater flexibility in managing the workload within a cluster, and improves overall availability of the system.

Cluster service benefits include:

  • High Availability. With Cluster service, ownership of resources such as disk drives and IP addresses is automatically transferred from a failed server to a surviving server. When a system or application in the cluster fails, the cluster software restarts the failed application on a surviving server, or disperses the work from the failed node to the remaining nodes. As a result, users experience only a momentary pause in service.
  • Failback. Cluster service automatically re-balances the workload in a cluster when a failed server comes back online.
  • Manageability. You can use the Cluster Administrator to manage a cluster as a single system and to manage applications as if they were running on a single server. You can move applications to different servers within the cluster by dragging and dropping cluster objects. You can move data to different servers in the same way. This can be used to manually balance server workloads and to unload servers for planned maintenance. You can also monitor the status of the cluster, all nodes and resources from anywhere on the network.
  • Scalability. Cluster services can grow to meet rising demands. When the overall load for a cluster-aware application exceeds the capabilities of the cluster, additional nodes can be added.

This paper provides instructions for installing Cluster service on servers running Windows 2000 Advanced Server and Windows 2000 Datacenter Server. It describes the process of installing the Cluster service on cluster nodes. It is not intended to explain how to install cluster applications, but rather to guide you through the process of installing a typical, two-node cluster itself.

Checklists for Cluster Server Installation

This checklist assists you in preparing for installation. Step-by-step instructions begin after the checklist.

Software Requirements

  • Microsoft Windows 2000 Advanced Server or Windows 2000 Datacenter Server installed on all computers in the cluster.
  • A name resolution method such as Domain Naming System (DNS), Windows Internet Naming System (WINS), HOSTS, etc.
  • Terminal Server to allow remote cluster administration is recommended.

Hardware Requirements

  • The hardware for a Cluster service node must meet the hardware requirements for Windows 2000 Advanced Server or Windows 2000 Datacenter Server. These requirements can be found at The Product Compatibility Search page
  • Cluster hardware must be on the Cluster Service Hardware Compatibility List (HCL). The latest version of the Cluster Service HCL can be found by going to the Windows Hardware Compatibility List and then searching on Cluster.

    Two HCL-approved computers, each with the following:

    • A boot disk with Windows 2000 Advanced Server or Windows 2000 Datacenter Server installed. The boot disk cannot be on the shared storage bus described below.
    • A separate PCI storage host adapter (SCSI or Fibre Channel) for the shared disks. This is in addition to the boot disk adapter.
    • Two PCI network adapters on each machine in the cluster.
    • An HCL-approved external disk storage unit that connects to all computers. This will be used as the clustered disk. A redundant array of independent disks (RAID) is recommended.
    • Storage cables to attach the shared storage device to all computers. Refer to the manufacturers' instructions for configuring storage devices. If an SCSI bus is used, see Appendix A for additional information.
    • All hardware should be identical, slot for slot, card for card, for all nodes. This will make configuration easier and eliminate potential compatibility problems.

Network Requirements

  • A unique NetBIOS cluster name.
  • Five unique, static IP addresses: two for the network adapters on the private network, two for the network adapters on the public network, and one for the cluster itself.
  • A domain user account for Cluster service (all nodes must be members of the same domain).
  • Each node should have two network adapters—one for connection to the public network and the other for the node-to-node private cluster network. If you use only one network adapter for both connections, your configuration is unsupported. A separate private network adapter is required for HCL certification.

Shared Disk Requirements:

  • All shared disks, including the quorum disk, must be physically attached to a shared bus.
  • Verify that disks attached to the shared bus can be seen from all nodes. This can be checked at the host adapter setup level. Please refer to the manufacturer's documentation for adapter-specific instructions.
  • SCSI devices must be assigned unique SCSI identification numbers and properly terminated, as per manufacturer's instructions.
  • All shared disks must be configured as basic (not dynamic).
  • All partitions on the disks must be formatted as NTFS.

While not required, the use of fault-tolerant RAID configurations is strongly recommended for all disks. The key concept here is fault-tolerant raid configurations—not stripe sets without parity.

Cluster Installation

Installation Overview

During the installation process, some nodes will be shut down and some nodes will be rebooted. These steps are necessary to guarantee that the data on disks that are attached to the shared storage bus is not lost or corrupted. This can happen when multiple nodes try to simultaneously write to the same disk that is not yet protected by the cluster software.

Use Table 1 below to determine which nodes and storage devices should be powered on during each step.

The steps in this guide are for a two-node cluster. However, if you are installing a cluster with more than two nodes, you can use the Node 2 column to determine the required state of other nodes.

Table 1 Power Sequencing Table for Cluster Installation

Step                          | Node 1 | Node 2 | Storage | Comments
Setting Up Networks           | On     | On     | Off     | Verify that all storage devices on the shared bus are powered off. Power on all nodes.
Setting up Shared Disks       | On     | Off    | On      | Shutdown all nodes. Power on the shared storage, then power on the first node.
Verifying Disk Configuration  | Off    | On     | On      | Shut down first node, power on second node. Repeat for nodes 3 and 4 if necessary.
Configuring the First Node    | On     | Off    | On      | Shutdown all nodes; power on the first node.
Configuring the Second Node   | On     | On     | On      | Power on the second node after the first node was successfully configured. Repeat for nodes 3 and 4 if necessary.
Post-installation             | On     | On     | On      | At this point all nodes should be on.

Several steps must be taken prior to the installation of the Cluster service software. These steps are:

  • Installing Windows 2000 Advanced Server or Windows 2000 Datacenter Server on each node.
  • Setting up networks.
  • Setting up disks.

Perform these steps on every cluster node before proceeding with the installation of Cluster service on the first node.

To configure the Cluster service on a Windows 2000-based server, your account must have administrative permissions on each node. All nodes must be member servers, or all nodes must be domain controllers within the same domain. It is not acceptable to have a mix of domain controllers and member servers in a cluster.

Installing the Windows 2000 Operating System

Please refer to the documentation you received with the Windows 2000 operating system packages to install the system on each node in the cluster.

This step-by-step guide uses the naming structure from the "Step-by-Step Guide to a Common Infrastructure for Windows 2000 Server Deployment" http://www.microsoft.com/windows2000/techinfo/planning/server/serversteps.asp. However, you can use any names.

You must be logged on as an administrator prior to installation of Cluster service.

Setting up Networks

Note: For this section, power down all shared storage devices and then power up all nodes. Do not let both nodes access the shared storage devices at the same time until the Cluster service is installed on at least one node and that node is online.

Each cluster node requires at least two network adapters—one to connect to a public network, and one to connect to a private network consisting of cluster nodes only.

The private network adapter establishes node-to-node communication, cluster status signals, and cluster management. Each node's public network adapter connects the cluster to the public network where clients reside.

Verify that all network connections are correct, with private network adapters connected to other private network adapters only, and public network adapters connected to the public network. The connections are illustrated in Figure 1 below. Run these steps on each cluster node before proceeding with shared disk setup.

Figure 1: Example of two-node cluster (clusterpic.vsd)

Configuring the Private Network Adapter

Perform these steps on the first node in your cluster.

  1. Right-click My Network Places and then click Properties.
  2. Right-click the Local Area Connection 2 icon.

    Note: Which network adapter is private and which is public depends upon your wiring. For the purposes of this document, the first network adapter (Local Area Connection) is connected to the public network, and the second network adapter (Local Area Connection 2) is connected to the private cluster network. This may not be the case in your network.
  3. Click Status. The Local Area Connection 2 Status window shows the connection status, as well as the speed of connection. If the window shows that the network is disconnected, examine cables and connections to resolve the problem before proceeding. Click Close.
  4. Right-click Local Area Connection 2 again, click Properties, and click Configure.
  5. Click Advanced. The window in Figure 2 should appear.
  6. Network adapters on the private network should be set to the actual speed of the network, rather than the default automated speed selection. Select your network speed from the drop-down list. Do not use an Auto-select setting for speed. Some adapters may drop packets while determining the speed. To set the network adapter speed, click the appropriate option such as Media Type or Speed.


    Figure 2: Advanced Adapter Configuration (advanced.bmp)
    All network adapters in the cluster that are attached to the same network must be identically configured to use the same Duplex Mode, Flow Control, Media Type, and so on. These settings should remain the same even if the hardware is different.

    Note: We highly recommend that you use identical network adapters throughout the cluster network.
  7. Click Transmission Control Protocol/Internet Protocol (TCP/IP).
  8. Click Properties.
  9. Click the radio-button for Use the following IP address and type in the following address: 10.1.1.1. (Use 10.1.1.2 for the second node.)
  10. Type in a subnet mask of 255.0.0.0.
  11. Click the Advanced radio button and select the WINS tab. Select Disable NetBIOS over TCP/IP. Click OK to return to the previous menu. Do this step for the private network adapter only.

    The window should now look like Figure 3 below.


    Figure 3: Private Connector IP Address (ip10111.bmp)

Configuring the Public Network Adapter

Note: While the public network adapter's IP address can be automatically obtained if a DHCP server is available, this is not recommended for cluster nodes. We strongly recommend setting static IP addresses for all network adapters in the cluster, both private and public. If IP addresses are obtained via DHCP, access to cluster nodes could become unavailable if the DHCP server goes down. If you must use DHCP for your public network adapter, use long lease periods to assure that the dynamically assigned lease address remains valid even if the DHCP service is temporarily lost. In all cases, set static IP addresses for the private network connector. Keep in mind that Cluster service will recognize only one network interface per subnet. If you need assistance with TCP/IP addressing in Windows 2000, please see Windows 2000 Online Help.

Rename the Local Area Network Icons

We recommend changing the names of the network connections for clarity. For example, you might want to change the name of Local Area Connection (2) to something like Private Cluster Connection. The naming will help you identify a network and correctly assign its role.

  1. Right-click the Local Area Connection 2 icon.
  2. Click Rename.
  3. Type Private Cluster Connection into the textbox and press Enter.
  4. Repeat steps 1-3 and rename the public network adapter as Public Cluster Connection.


    Figure 4: Renamed connections (connames.bmp)
  5. The renamed icons should look like those in Figure 4 above. Close the Networking and Dial-up Connections window. The new connection names automatically replicate to other cluster servers as they are brought online.

Verifying Connectivity and Name Resolution

To verify that the private and public networks are communicating properly, perform the following steps for each network adapter in each node. You need to know the IP address for each network adapter in the cluster. If you do not already have this information, you can retrieve it using the ipconfig command on each node:

  1. Click Start, click Run and type cmd in the text box. Click OK.
  2. Type ipconfig /all and press Enter. IP information should display for all network adapters in the machine.
  3. If you do not already have the command prompt on your screen, click Start, click Run and type cmd in the text box. Click OK.
  4. Type ping ipaddress where ipaddress is the IP address for the corresponding network adapter in the other node. For example, assume that the IP addresses are set as follows:

    Node | Network Name                | Network Adapter IP Address
    1    | Public Cluster Connection   | 172.16.12.12
    1    | Private Cluster Connection  | 10.1.1.1
    2    | Public Cluster Connection   | 172.16.12.14
    2    | Private Cluster Connection  | 10.1.1.2

In this example, you would type ping 172.16.12.14 and ping 10.1.1.2 from Node 1, and you would type ping 172.16.12.12 and ping 10.1.1.1 from Node 2.

To verify name resolution, ping each node from a client using the node's machine name instead of its IP number. For example, to verify name resolution for the first cluster node, type ping hq-res-dc01 from any client.
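
The checks above boil down to a handful of standard commands. Here is a minimal sketch using the example addresses and the hq-res-dc01 machine name from this guide; substitute your own values and repeat the ping tests in the opposite direction from Node 2:

    # On Node 1: list the adapter configuration, then ping Node 2 on both networks.
    ipconfig /all
    ping 172.16.12.14
    ping 10.1.1.2

    # From a client: verify name resolution for the first cluster node by machine name.
    ping hq-res-dc01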

Verifying Domain Membership

All nodes in the cluster must be members of the same domain and able to access a domain controller and a DNS Server. They can be configured as member servers or domain controllers. If you decide to configure one node as a domain controller, you should configure all other nodes as domain controllers in the same domain as well. In this document, all nodes are configured as domain controllers.

Note: See More Information at the end of this document for links to additional Windows 2000 documentation that will help you understand and configure domain controllers, DNS, and DHCP.

  1. Right-click My Computer, and click Properties.
  2. Click Network Identification. The System Properties dialog box displays the full computer name and domain. In our example, the domain name is reskit.com.
  3. If you are using member servers and need to join a domain, you can do so at this time. Click Properties and follow the on-screen instructions for joining a domain.
  4. Close the System Properties and My Computer windows.

Setting Up a Cluster User Account

The Cluster service requires a domain user account under which the Cluster service can run. This user account must be created before installing Cluster service, because setup requires a user name and password. This user account should not belong to a user on the domain.

  1. Click Start, point to Programs, point to Administrative Tools, and click Active Directory Users and Computers
  2. Click the + to expand Reskit.com (if it is not already expanded).
  3. Click Users.
  4. Right-click Users, point to New, and click User.
  5. Type in the cluster name as shown in Figure 5 below and click Next.


    Figure 5: Add Cluster User (clusteruser.bmp)
  6. Set the password settings to User Cannot Change Password and Password Never Expires. Click Next and then click Finish to create this user.

    Note: If your administrative security policy does not allow the use of passwords that never expire, you must renew the password and update the cluster service configuration on each node before password expiration.
  7. Right-click Cluster in the left pane of the Active Directory Users and Computers snap-in. Select Properties from the context menu.
  8. Click Add Members to a Group.
  9. Click Administrators and click OK. This gives the new user account administrative privileges on this computer.
  10. Close the Active Directory Users and Computers snap-in.
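
If you prefer the command line, the same account can also be created with the standard net commands. This is a minimal sketch only, assuming the account name cluster and the reskit domain from this example; SomePassword is a placeholder, and the password options (User Cannot Change Password, Password Never Expires) still need to be set as described in step 6:

    # Create the domain account for the Cluster service (run on a domain controller).
    net user cluster SomePassword /add /domain

    # Give the account administrative rights on the node, as in step 9.
    net localgroup Administrators reskit\cluster /add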

Setting Up Shared Disks

Warning: Make sure that Windows 2000 Advanced Server or Windows 2000 Datacenter Server and the Cluster service are installed and running on one node before starting an operating system on another node. If the operating system is started on other nodes before the Cluster service is installed, configured and running on at least one node, the cluster disks will probably be corrupted.

To proceed, power off all nodes. Power up the shared storage devices and then power up node one.

About the Quorum Disk

The quorum disk is used to store cluster configuration database checkpoints and log files that help manage the cluster. We make the following quorum disk recommendations:

  • Create a small partition (a minimum of 50 MB) to be used as a quorum disk. (We generally recommend a quorum disk of 500 MB.)
  • Dedicate a separate disk for a quorum resource. As the failure of the quorum disk would cause the entire cluster to fail, we strongly recommend you use a volume on a RAID disk array.

During the Cluster service installation, you must provide the drive letter for the quorum disk. In our example, we use the letter Q.

Configuring Shared Disks

  1. Right click My Computer, click Manage, and click Storage.
  2. Double-click Disk Management
  3. Verify that all shared disks are formatted as NTFS and are designated as Basic. If you connect a new drive, the Write Signature and Upgrade Disk Wizard starts automatically. If this happens, click Next to go through the wizard. The wizard sets the disk to dynamic. To reset the disk to Basic, right-click Disk # (where # specifies the disk you are working with) and click Revert to Basic Disk.

    Right-click unallocated disk space

    1. Click Create Partition…
    2. The Create Partition Wizard begins. Click Next twice.
    3. Enter the desired partition size in MB and click Next.
    4. Accept the default drive letter assignment by clicking Next.
    5. Click Next to format and create partition.

Assigning Drive Letters

After the bus, disks, and partitions have been configured, drive letters must be assigned to each partition on each clustered disk.

Note: Mount points are a feature of the file system that allows you to mount a file system using an existing directory without assigning a drive letter. Mount points are not supported on clusters. Any external disk used as a cluster resource must be partitioned using NTFS partitions and must have a drive letter assigned to it.

  1. Right-click the desired partition and select Change Drive Letter and Path.
  2. Select a new drive letter.
  3. Repeat steps 1 and 2 for each shared disk.


    Figure 6: Disks with Drive Letters Assigned (drives.bmp)
  4. When finished, the Computer Management window should look like Figure 6 above. Now close the Computer Management window.

Verifying Disk Access and Functionality

  1. Click Start, click Programs, click Accessories, and click Notepad.
  2. Type some words into Notepad and use the File/Save As command to save it as a test file called test.txt. Close Notepad.
  3. Double-click the My Documents icon.
  4. Right-click test.txt and click Copy
  5. Close the window.
  6. Double-click My Computer.
  7. Double-click a shared drive partition.
  8. Click Edit and click Paste.
  9. A copy of the file should now reside on the shared disk.
  10. Double-click test.txt to open it on the shared disk. Close the file.
  11. Highlight the file and press the Del key to delete it from the clustered disk.

Repeat the process for all clustered disks to verify they can be accessed from the first node.

At this time, shut down the first node, power on the second node and repeat the Verifying Disk Access and Functionality steps above. Repeat again for any additional nodes. When you have verified that all nodes can read and write from the disks, turn off all nodes except the first, and continue with this guide.
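
The same read/write test can be scripted from a command prompt. A minimal sketch, assuming one of the shared drive letters is F: as in this example:

    # Write a test file to the shared disk, read it back, then delete it.
    echo cluster disk test > F:\test.txt
    type F:\test.txt
    del F:\test.txt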

Install Cluster Service software

Configuring the First Node

Note: During installation of Cluster service on the first node, all other nodes must either be turned off, or stopped prior to Windows 2000 booting. All shared storage devices should be powered up.

In the first phase of installation, all initial cluster configuration information must be supplied so that the cluster can be created. This is accomplished using the Cluster Service Configuration Wizard.

  1. Click Start, click Settings, and click Control Panel.
  2. Double-click Add/Remove Programs.
  3. Double-click Add/Remove Windows Components.
  4. Select Cluster Service. Click Next.
  5. Cluster service files are located on the Windows 2000 Advanced Server or Windows 2000 Datacenter Server CD-ROM. Enter x:\i386 (where x is the drive letter of your CD-ROM). If Windows 2000 was installed from a network, enter the appropriate network path instead. (If the Windows 2000 Setup flash screen displays, close it.) Click OK.
  6. Click Next.
  7. The window shown in Figure 7 below appears. Click I Understand to accept the condition that Cluster service is supported on hardware from the Hardware Compatibility List only.


    Figure 7: Hardware Configuration Certification Screen (hcl.bmp)
  8. Because this is the first node in the cluster, you must create the cluster itself. Select The first node in the cluster, as shown in Figure 8 below and then click Next.


    Figure 8: Create New Cluster (clustcreate.bmp)
  9. Enter a name for the cluster (up to 15 characters), and click Next. (In our example, we name the cluster MyCluster.)
  10. Type the user name of the cluster service account that was created during the pre-installation. (In our example, this user name is cluster.) Leave the password blank. Type the domain name, and click Next.

    Note: You would normally provide a secure password for this user account.

    At this point the Cluster Service Configuration Wizard validates the user account and password.
  11. Click Next.

Configuring Cluster Disks

Note: By default, all SCSI disks not residing on the same bus as the system disk will appear in the Managed Disks list. Therefore, if the node has multiple SCSI buses, some disks may be listed that are not to be used as shared storage (for example, an internal SCSI drive.) Such disks should be removed from the Managed Disks list.

  1. The Add or Remove Managed Disks dialog box shown in Figure 9 specifies which disks on the shared SCSI bus will be used by Cluster service. Add or remove disks as necessary and then click Next.


    Figure 9: Add or Remove Managed Disks (manageddisks.bmp)
    Note that because logical drives F: and G: exist on a single hard disk, they are seen by Cluster service as a single resource. The first partition of the first disk is selected as the quorum resource by default. Change this to denote the small partition that was created as the quorum disk (in our example, drive Q). Click Next.

    Note: In production clustering scenarios you must use more than one private network for cluster communication to avoid having a single point of failure. Cluster service can use private networks for cluster status signals and cluster management. This provides more security than using a public network for these roles. You can also use a public network for cluster management, or you can use a mixed network for both private and public communications. In any case, make sure at least two networks are used for cluster communication, as using a single network for node-to-node communication represents a potential single point of failure. We recommend that multiple networks be used, with at least one network configured as a private link between nodes and other connections through a public network. If you have more than one private network, make sure that each uses a different subnet, as Cluster service recognizes only one network interface per subnet.

    This document is built on the assumption that only two networks are in use. It shows you how to configure these networks as one mixed and one private network.

    The order in which the Cluster Service Configuration Wizard presents these networks may vary. In this example, the public network is presented first.
  2. Click Next in the Configuring Cluster Networks dialog box.
  3. Make sure that the network name and IP address correspond to the network interface for the public network.
  4. Check the box Enable this network for cluster use.
  5. Select the option All communications (mixed network) as shown in Figure 10 below.
  6. Click Next.


    Figure 10: Public Network Connection (pubclustnet.bmp)
  7. The next dialog box shown in Figure 11 configures the private network. Make sure that the network name and IP address correspond to the network interface used for the private network.
  8. Check the box Enable this network for cluster use.
  9. Select the option Internal cluster communications only.


    Figure 11: Private Network Connection (privclustnet.bmp)
  10. Click Next.
  11. In this example, both networks are configured in such a way that both can be used for internal cluster communication. The next dialog window offers an option to modify the order in which the networks are used. Because Private Cluster Connection represents a direct connection between nodes, it is left at the top of the list. In normal operation this connection will be used for cluster communication. In case of the Private Cluster Connection failure, cluster service will automatically switch to the next network on the list—in this case Public Cluster Connection. Make sure the first connection in the list is the Private Cluster Connection and click Next.

    Important: Always set the order of the connections so that the Private Cluster Connection is first in the list.
  12. Enter the unique cluster IP address (172.16.12.20) and Subnet mask (255.255.252.0), and click Next.


    Figure 12: Cluster IP Address (clusterip.bmp)
    The Cluster Service Configuration Wizard shown in Figure 12 automatically associates the cluster IP address with one of the public or mixed networks. It uses the subnet mask to select the correct network.
  13. Click Finish to complete the cluster configuration on the first node.

    The Cluster Service Setup Wizard completes the setup process for the first node by copying the files needed to complete the installation of Cluster service. After the files are copied, the Cluster service registry entries are created, the log files on the quorum resource are created, and the Cluster service is started on the first node.

    A dialog box appears telling you that Cluster service has started successfully.
  14. Click OK.
  15. Close the Add/Remove Programs window.

Validating the Cluster Installation

Use the Cluster Administrator snap-in to validate the Cluster service installation on the first node.

  1. Click Start, click Programs, click Administrative Tools, and click Cluster Administrator.


    Figure 13: Cluster Administrator (1nodeadmin.bmp)
    If your snap-in window is similar to that shown above in Figure 13, your Cluster service was successfully installed on the first node. You are now ready to install Cluster service on the second node.

Configuring the Second Node

Note: For this section, leave node one and all shared disks powered on. Power up the second node.

Installing Cluster service on the second node requires less time than on the first node. Setup configures the Cluster service network settings on the second node based on the configuration of the first node.

Installation of Cluster service on the second node begins exactly as for the first node. During installation of the second node, the first node must be running.

Follow the same procedures used for installing Cluster service on the first node, with the following differences:

  1. In the Create or Join a Cluster dialog box, select The second or next node in the cluster, and click Next.
  2. Enter the cluster name that was previously created (in this example, MyCluster), and click Next.
  3. Leave Connect to cluster as unchecked. The Cluster Service Configuration Wizard will automatically supply the name of the user account selected during the installation of the first node. Always use the same account used when setting up the first cluster node.
  4. Enter the password for the account (if there is one) and click Next.
  5. At the next dialog box, click Finish to complete configuration.
  6. The Cluster service will start. Click OK.
  7. Close Add/Remove Programs.

If you are installing additional nodes, repeat these steps to install Cluster service on all other nodes.


Verify Installation

There are several ways to verify a successful installation of Cluster service. Here is a simple one:

  1. Click Start, click Programs, click Administrative Tools, and click Cluster Administrator.


    Figure 14: Cluster Resources (clustadmin.bmp)
    The presence of two nodes (HQ-RES-DC01 and HQ-RES-DC02 in Figure 14 above) shows that a cluster exists and is in operation.
  2. Right-click the group Disk Group 1 and select the option Move. The group and all its resources will be moved to another node. After a short period of time, the disks F: and G: will be brought online on the second node. If you watch the screen, you will see this shift. Close the Cluster Administrator snap-in.

Congratulations. You have completed the installation of Cluster service on all nodes. The server cluster is fully operational. You are now ready to install cluster resources like file shares, printer spoolers, cluster aware services like IIS, Message Queuing, Distributed Transaction Coordinator, DHCP, WINS, or cluster aware applications like Exchange or SQL Server.
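
The cluster.exe command-line tool that ships with Cluster service offers another quick check. This is a minimal sketch, assuming the example cluster name MyCluster and the Disk Group 1 group shown above:

    # Show the state of both nodes, then list the resource groups.
    cluster MyCluster node /status
    cluster MyCluster group

    # Move Disk Group 1 to the other node, mirroring the Move test above.
    cluster MyCluster group "Disk Group 1" /move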

For Additional Information

This guide covers a simple installation of Cluster service. For more articles and papers on Windows 2000 Server, Windows 2000 Advanced Server, and Windows 2000 Cluster service, see: The Windows 2000 Web site. For information on installing DHCP, Active Directory, and other services, see Windows 2000 Online Help, the Windows 2000 Planning and Deployment Guide, and the Windows 2000 Resource Kit.


Appendix A

This appendix is provided as a generic instruction set for SCSI drive installations. If the SCSI hard disk vendor's instructions conflict with the instructions here, always use the instructions supplied by the vendor.

The SCSI bus listed in the hardware requirements must be configured prior to installation of Cluster services. This includes:

  • Configuring the SCSI devices.
  • Configuring the SCSI controllers and hard disks to work properly on a shared SCSI bus.
  • Properly terminating the bus. The shared SCSI bus must have a terminator at each end of the bus. It is possible to have multiple shared SCSI buses between the nodes of a cluster.

In addition to the information on the following pages, refer to the documentation from the manufacturer of the SCSI device or the SCSI specifications, which can be ordered from the American National Standards Institute (ANSI). The ANSI web site contains a catalog that can be searched for the SCSI specifications.

Configuring the SCSI Devices

Each device on the shared SCSI bus must have a unique SCSI ID. Since most SCSI controllers default to SCSI ID 7, part of configuring the shared SCSI bus will be to change the SCSI ID on one controller to a different SCSI ID, such as SCSI ID 6. If there is more than one disk that will be on the shared SCSI bus, each disk must also have a unique SCSI ID.

Some SCSI controllers reset the SCSI bus when they initialize at boot time. If this occurs, the bus reset can interrupt any data transfers between the other node and disks on the shared SCSI bus. Therefore, SCSI bus resets should be disabled if possible.

Terminating the Shared SCSI Bus

Y cables can be connected to devices if the device is at the end of the SCSI bus. A terminator can then be attached to one branch of the Y cable to terminate the SCSI bus. This method of termination requires either disabling or removing any internal terminators the device may have.

Trilink connectors can be connected to certain devices. If the device is at the end of the bus, a trilink connector can be used to terminate the bus. This method of termination requires either disabling or removing any internal terminators the device may have.

Y cables and trilink connectors are the recommended termination methods, because they provide termination even when one node is not online.

Note: Any devices that are not at the end of the shared bus must have their internal termination disabled.

1 See Appendix A for information about installing and terminating SCSI devices.



https://msdn.microsoft.com/en-us/library/bb727114.aspx


Author: Angelo A Vitale
Last update: 2018-12-11 13:05


Windows Server 2016 Evaluation: How to extend the Trial Period

In this blog post I show how to extend your trial period to three years. The evaluation version of Windows Server 2012 / 2016 is valid for 180 days and you can convert your trial version to retail.



To explore Windows Server 2016 download it here:

https://www.microsoft.com/en-us/evalcenter/evaluate-windows-server-2016/

After installing, you can try it for 180 days. After 180 days, you and your system will run into trouble. But the good news is: you can extend the period.

Extending the Trial Period

First, take a look at your desktop. You should see the countdown in the lower-right corner.


Or start PowerShell and run slmgr.

slmgr -dlv

Pay attention to the Time-based activation expiration and the Remaining Windows rearm count. You can rearm the period 6 times (180 days * 6 = 3 years).


When the period comes to an end, run slmgr -rearm to extend it by another 180 days.

slmgr -rearm


Next restart your computer.

Restart-Computer

Once restarted, open PowerShell and check your settings.

slmgr -dli

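If you prefer to script the check and the rearm together, the following PowerShell sketch reads the remaining rearm count from the SoftwareLicensingService WMI class and, if any rearms are left, extends the evaluation and reboots. Treat it as an illustration of the manual commands above:

    # Read the remaining rearm count.
    $sls = Get-CimInstance -ClassName SoftwareLicensingService
    "Remaining rearm count: $($sls.RemainingWindowsReArmCount)"

    # If rearms are left, extend the evaluation by another 180 days and reboot.
    if ($sls.RemainingWindowsReArmCount -gt 0) {
        cscript.exe //nologo "$env:windir\System32\slmgr.vbs" /rearm
        Restart-Computer -Confirm
    }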

Important Note

The evaluation version may not be used for commercial purposes. Have fun playing with the Windows Server 2016 Evaluation Version!

By the way: You can do the same with Windows 10. But the Windows 10 Evaluation Version can be used only for 180 days in total.


https://sid-500.com/2017/08/08/windows-server-2016-evaluation-how-to-extend-the-trial-period/

Author: Angelo A Vitale
Last update: 2018-12-11 13:06


Creating a reboot schedule for your dedicated server

 

Follow the instructions below to set up your server with a scheduled reboot.

These steps are for Windows 2003
1. Click Start. Navigate to All Programs > Accessories > System Tools > Scheduled Tasks > Add Scheduled Task. 
2. Click Next
3. Click Browse and locate shutdown.exe; it should be located at c:\windows\system32\shutdown.exe. 
4. Click Next. Name your task and select when you would like to reboot.
5. Click Next. Select your time. Enter your administrator name and password. Confirm your password.
6. Click Next. Select open advanced properties for this task when I click Finish. 
7. Under the task tab (in the run command section) copy and paste the following (make proper changes to the command accordingly):
c:\windows\system32\shutdown.exe /r /f /m \\servername /t 60 /d p:4:1 /c "name of scheduled task"  
8. Click OK

Explanation of arguments in this command.
/r - Will Reboot the Server
/f - Force running applications to close without forewarning users.
/m - Is only used with \\servername and is handy if you want to reboot a remote server.
/t - Set the time-out period before shutdown to xxx seconds.
/d - Provides the reason for a shutdown. p:4:1 Will write to the event viewer "Application: Maintenance (Planned)"
/c -  Comment on the reason for the restart or shutdown.

These steps are for Windows 2008/2012
1.  (2008) Click Start. Navigate to All Programs > Accessories > System Tools > Task Scheduler.  
    (2012) Open Server Manager > Tools > Task Scheduler.
2. Click Create Basic Task.



3. Name your task and give it a description.
4. Click Next. Define the schedule on which you would like the reboot to occur.



5. Click Next. Specify what time the reboot should occur.
6. Click Next. The action should be Start a program.
7. Click Next. Click Browse and locate shutdown.exe (c:\windows\system32\shutdown.exe).
8. Add the following in the argument field (replacing servername with your server) /r /f /m \\servername /t 60 /d p:4:1 /c "name of scheduled task"
9. Click Next. Check Open the Properties dialog. Click Finish.



10. Change the radio button from Run only when user is logged on to Run whether user is logged on or not.
11. Check the box for Run with highest privileges.



12. Click Ok. Enter the information for the user that would have the permissions to reboot the server.

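The same scheduled reboot can be created from the command line with schtasks.exe instead of the wizard. A minimal sketch, assuming a weekly reboot every Sunday at 03:00 under the built-in SYSTEM account; adjust the task name, schedule and shutdown arguments to suit (the /m and /c switches from the examples above can be appended if needed):

    # Create a weekly reboot task that runs whether or not anyone is logged on.
    schtasks /Create /TN "Scheduled Reboot" /SC WEEKLY /D SUN /ST 03:00 /RU SYSTEM /TR "c:\windows\system32\shutdown.exe /r /f /t 60 /d p:4:1"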


http://help.maximumasp.com/KB/a495/creating-a-reboot-schedule-for-your-dedicated-server.aspx

Author: Angelo A Vitale
Last update: 2018-12-12 14:37


Step-By-Step: Removing A Domain Controller Server Manually

The proper way to remove a DC server in an Active Directory infrastructure is to run DCPROMO and remove it. A video example of these steps is available in the original article linked below.

There are certain situations however, such as a server crash or failure of the DCPROMO option, that would require manual removal of the DC from the system by cleaning up the server's metadata, as detailed in the following steps:

Step 1: Cleaning up metadata via Active Directory Users and Computers

  1. Log in to the DC server as a Domain/Enterprise administrator and navigate to Server Manager > Tools > Active Directory Users and Computers
  2. Expand the Domain > Domain Controllers
  3. Right-click on the DC server that needs to be removed manually and click Delete
  4. In the next dialog box, click Yes to confirm
  5. In the next dialog box, select This Domain Controller is permanently offline and can no longer be demoted using the Active Directory Domain Services Installation Wizard (DCPROMO) and click Delete
  6. If the domain controller is a global catalog server, in the next window click Yes to continue with the deletion
  7. If the domain controller holds any FSMO roles, in the next window click OK to move them to a domain controller which is available

Step 2: Cleaning up the DC server instance from the Active Directory Sites and Services

  1. Go to Server Manager > Tools > Active Directory Sites and Services
  2. Expand the Sites and go to the server which needs to be removed
  3. Right-click and click Delete
  4. In the next window, click Yes to confirm

Step 3: Clean up metadata using ntdsutil

NOTE: On Windows Server 2003 and earlier, cleaning up metadata with ntdsutil was a bit of a challenge, but the procedure was later simplified (a consolidated one-line example is sketched after these steps).

  1. Right Click on Start > Command Prompt (admin)
  2. Type ntdsutil and press Enter
  3. Then type metadata cleanup
  4. Next type remove selected server <servername>, replacing <servername> with the DC server to remove
  5. In the warning window, click Yes to proceed
  6. Execute the quit command twice
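
As referenced in the note above, ntdsutil also accepts its commands as arguments, so the interactive session can be collapsed into one line. A minimal sketch, where DC02 is a hypothetical server name to replace with the domain controller you are removing:

    # Run the whole metadata cleanup in one shot from an elevated prompt.
    ntdsutil "metadata cleanup" "remove selected server DC02" quit quit
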
    https://blogs.technet.microsoft.com/canitpro/2016/02/17/step-by-step-removing-a-domain-controller-server-manually/

Author: Angelo A Vitale
Last update: 2018-12-17 11:53


Step-By-Step: Downgrading A Windows Server Domain and Forest Functional Level

Once upon a time, it was not possible to downgrade Windows Server forest and domain functional levels once upgraded. Enter Windows Server 2012 R2 and its Active Directory enhancements, as detailed by the video below, backed by PowerShell automation capabilities. This makes the forest and domain functional level downgrade even easier. Do keep in mind however that the lowest functional level that can be utilized is Windows Server 2008 R2.

In this Step-By-Step, we will be using a domain controller with the forest and domain functional level set to Windows Server 2012 R2. PowerShell will be utilized as there is no GUI to perform this downgrade.

Let's get started.
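
For reference, the individual steps below condense into the following PowerShell sketch. CANITPRO.com is the example domain used throughout this post; -Confirm:$false suppresses the Y/N prompt shown in step 6, so omit it if you prefer to confirm interactively:

    # Import the AD module and show the current functional levels.
    Import-Module -Name ActiveDirectory
    (Get-ADForest).ForestMode
    (Get-ADDomain).DomainMode

    # Downgrade the forest, then the domain, to Windows Server 2008.
    Set-ADForestMode -Identity "CANITPRO.com" -ForestMode Windows2008Forest -Confirm:$false
    Set-ADDomainMode -Identity "CANITPRO.com" -DomainMode Windows2008Domain -Confirm:$false

    # Verify the result.
    (Get-ADForest).ForestMode
    (Get-ADDomain).DomainMode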

  1. Log on to the domain controller as domain admin / Enterprise admin.
     
  2. Run PowerShell with Admin rights.
     
     
     
  3. In PowerShell, type Import-Module -Name ActiveDirectory to import the AD module. 

     

  4. Confirm the forest and domain functional levels by typing the following in PowerShell: 
     
    (Get-ADForest).ForestMode
     
    (Get-ADDomain).DomainMode
     
     
  5. Next, to set the forest functional level to Windows Server 2008, in PowerShell type:
     
    Set-ADForestMode -Identity "CANITPRO.com" -ForestMode Windows2008Forest
     
    NOTE: 
    In this example the FQDN is CANITPRO.com which can be replaced with your desired domain name. 
     
     
  6. Enter Y to confirm the change
     

     

  7. Next, to downgrade the domain functional level to Windows Server 2008, in PowerShell type:
     
    Set-ADDomainMode -Identity "CANITPRO.com" -DomainMode Windows2008Domain
     
     
  8. Confirm the new forest and domain functional levels have been downgraded by typing the following in PowerShell: 
     
    (Get-ADForest).ForestMode
     
    (Get-ADDomain).DomainMode
     
    https://blogs.technet.microsoft.com/canitpro/2016/01/20/step-by-step-downgrading-a-windows-server-domain-and-forest-functional-level/

Author: Angelo A Vitale
Last update: 2018-12-17 12:10


Step-By-Step: Allowing or Preventing Domain Users From Joining Workstations to the Domain


By default, an Active Directory domain environment allows any authenticated domain user to add up to 10 workstations to the domain. That said, there may come a time when an organization needs to increase or decrease this limit. An example would be an authenticated user bringing their personal Surface Pro into the office. Unless there is a block in place via NPS (Network Policy Server) or network-level port protection is enabled, the user can easily join the personal device to the domain, and it could become a threat to the organization down the road. 

Based on this scenario, the following post will run through the steps for editing the number of devices that can be joined, or blocking it altogether. This demo uses a Windows Server 2012 R2 domain controller, however similar steps can be used in a Windows Server 2008 environment as well.

Note: This limit does not apply to any user account that is a member of the Domain Admins or Enterprise Admins group.

  1. Log in to the DC server as domain admin or enterprise admin
     
  2. Go to Server Manager > Tools > ADSI Edit
     
     
  3. In the console, expand the default naming context and select the correct domain
     
    Note: in a forest there can be different domains, based on the configuration
     
     
  4. Then right-click on it and select Properties 
     
     
  5. Once the list is open, find the attribute called ms-DS-MachineAccountQuota. This is the attribute responsible for the above limit. By default it is set to 10. Setting it to 0 disables joins for ordinary users entirely; otherwise, the value can be adjusted based on your requirements (a PowerShell alternative is sketched after these steps). 
     
      
  6. Once done, click OK until you exit from the popup window.
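
As referenced in step 5 above, the same attribute can be read and changed with the ActiveDirectory PowerShell module instead of ADSI Edit. A minimal sketch, assuming the AD PowerShell module (RSAT) is available; 0 disables joins for ordinary users, any other value adjusts the limit:

    Import-Module ActiveDirectory
    $domainDN = (Get-ADDomain).DistinguishedName

    # Show the current limit (10 by default).
    Get-ADObject -Identity $domainDN -Properties "ms-DS-MachineAccountQuota"

    # Change the limit.
    Set-ADObject -Identity $domainDN -Replace @{"ms-DS-MachineAccountQuota" = 0}
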
    https://blogs.technet.microsoft.com/canitpro/2015/05/26/step-by-step-allowing-or-preventing-domain-users-from-joining-workstations-to-the-domain/

Author: Angelo A Vitale
Last update: 2018-12-17 12:57


How to Upgrade Windows Server 2016 Evaluation to Full Version

If you have installed Windows Server 2016 Standard Evaluation or Datacenter Evaluation (you can download it here after signup) to try the features of the new version of the MSFT server platform, you have 180 days to test it. During this period, all features of Windows Server 2016 are available to you. After the trial period is over, the system starts to ask for activation and shuts down every hour. The notification Windows License is expired is shown on your desktop. If you have ended up running productive tasks on the evaluation Windows Server 2016 version and want to upgrade it to a full Windows Server edition, while keeping your data and without any need to completely reinstall the system, this article is for you.

If you try to specify the KMS key or the Retail/MAK activation key for the RTM version in the Evaluation edition, the following warning appears: “This edition cannot be upgraded”. But not everything is so sad.

Windows Server 2016: This edition cannot be upgraded

Let’s make sure that you are using the evaluation edition. Start the command prompt with the administrator privileges and run the following command:

DISM /online /Get-CurrentEdition

(The output shows that the current edition is ServerStandardEval.)

Get the list of editions you can convert your current Eval edition to:

DISM /online /Get-TargetEditions


As you can see, now we have ServerStandardEval edition, and it can be upgraded to the following Windows Server 2016 editions: ServerDatacenter or ServerStandard.

Using the public KMS key for Windows Server 2016, upgrade your Eval edition to Retail version of Windows Ser
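
A minimal sketch of the conversion command itself, assuming the ServerStandard target reported above; the product key shown is only a placeholder for the appropriate public KMS client setup key (or your own MAK/Retail key):

    # Convert the evaluation edition in place; the server restarts to complete the change.
    DISM /online /Set-Edition:ServerStandard /ProductKey:XXXXX-XXXXX-XXXXX-XXXXX-XXXXX /AcceptEula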