This file contains notes for installing Apache plus PHP, along with all of the relevant pieces that go with PHP. It also includes notes on installing Firefox, Webalizer, htDig, Coppermine and Wordpress.
Before proceeding, if your system doesn't already have an Apache userid and group, you should define them in the following manner (options for RedHat shown):
/usr/sbin/groupadd -g 48 -r apache
/usr/sbin/useradd -c Apache -d /var/www -g apache -M -n -r -s /sbin/nologin -u 48 apache
Download the latest Apache server (httpd) tar file from http://httpd.apache.org/ and untar it in the top level source directory (e.g. /rpm/Apache):
tar -xvzf httpd-m.n.xx.tar.gz
Now, make sure that you have libtool and the libtool development tools installed. You can use the typical package version that comes with your OS but make sure the development package is also installed (e.g. libtool-devel). If it is not, you'll get a build error.
Also, make sure that you have pcre and the pcre development tools installed. You can use the typical package version that comes with your OS but make sure the development package is also installed (e.g. pcre-devel). Failure to install it will get you a build error.
It will create a new directory for that version of Apache. Switch to that directory and build Apache:
cd httpd-m.n.xx
./configure --prefix=/usr/share/httpd-m.n --enable-proxy --enable-rewrite
make
The install directory is given by "--prefix". It should probably include the two high-level version numbers so that previous versions of httpd can be kept for posterity. The "--enable-proxy" flag allows httpd to pass requests through to another httpd on another server.
Switch to super-duper user and install Apache:
su
make install
Note that, if you are building Apache 2.4.x or later, there is an added level of b.s. that you may need to go through before you can build Apache. It depends on what is automatically installed by your version of the OS but, if you get the error message:
configure: error: APR not found. Please read the documentation.
You first need to install APR into the build tree before you can continue. Surf over to apr.apache.org/download.cgi and get the latest versions of the portable library and the utilities in the top-level directory:
apr-a.b.xx.tar.bz2
apr-util-c.d.xx.tar.bz2
Then, unpack and build the files like this:
tar -xvjf apr-a.b.xx.tar.bz2
cd apr-a.b.xx
./configure
make
su
make install
<Ctrl-D>
cd ..
tar -xvjf apr-util-c.d.xx.tar.bz2
cd apr-util-c.d.xx
./configure --with-apr=../apr-a.b.xx
make
su
make install
Now, move to the httpd build directory and build Apache like this:
cd httpd-m.n.xx
./configure --prefix=/usr/share/httpd-m.n --enable-proxy --enable-rewrite
make
Note that, if you unpack a different httpd source file or get later versions of apr or apr-util, you must redo the symlinks.
Coppermine as well as various other applications require that the GD library be included in PHP.
Under Unix, the normal PHP build option is --with-gd, which will cause the built-in version of GD to be used. However, the later versions of this library support both png and gif files so download the tar file from http://www.libgd.org/ and untar it in the top level source directory (e.g. /rpm/GD):
tar -xvzf gd-a.b.yy.tar.gz
Check out the README.TXT file for a list of prerequisites for the GD library. If you don't already have them, download and install them (at least zlib, libpng, freetype and the jpeg library are recommended).
1) zlib, available from http://www.zlib.net/
   General-purpose compression library
   ./configure --prefix=/usr --shared
2) libpng, available from http://www.libpng.org/pub/png/
   Portable Network Graphics library; requires zlib
   ./configure --prefix=/usr
3) FreeType 2.x, available from http://www.freetype.org/
   Free, high-quality, and portable font engine
4) JPEG library, available from http://www.ijg.org/
   Portable JPEG compression/decompression library
   ./configure --prefix=/usr --enable-shared --enable-static
All of these libraries are built with the standard "./configure", "make", "make install" sequence, as shown below. To build and install them in the system's library directory (i.e. /usr/lib), you should use the configure command shown, as follows:
./configure ...
make
make install
For the JPEG library, you may need to create the /usr/man directory structure on some systems to hold the man pages. The directories required are "man1", "man5" and "man8". Their permissions should be set the same as the standard man directories (e.g. /usr/share/man/manx).
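The steps above can be sketched as follows. This is a minimal sketch; the real prefix on the systems described is /usr, but it defaults to a scratch directory here so you can inspect the result before touching the system:

```shell
# Create the man page directories the JPEG library's "make install" expects.
# Set prefix=/usr to do it for real; by default we use a scratch directory.
prefix=${prefix:-$(mktemp -d)}
for d in man1 man5 man8; do
    mkdir -p "$prefix/man/$d"
    chmod u=rwx,go=rx "$prefix/man/$d"    # same permissions as /usr/share/man/manx
done
ls -ld "$prefix"/man/man?
```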
If you do not have a version of autoconf later than 2.54, you will need to get the latest version from GNU (http://ftp.gnu.org/gnu/autoconf/) and install it with the usual "./configure", "make", "make install" sequence.
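One way to do the version check from a script is with "sort -V", which orders version strings numerically. The sample value below is an assumption for illustration; substitute the output of "autoconf --version" on your system:

```shell
# Compare the installed autoconf version against the 2.54 minimum.
ver=2.69    # e.g. ver=$(autoconf --version | head -1 | awk '{print $NF}')
if [ "$(printf '%s\n' 2.54 "$ver" | sort -V | tail -1)" = "$ver" ] \
        && [ "$ver" != "2.54" ]; then
    echo "autoconf $ver is new enough"
else
    echo "autoconf $ver is too old; fetch a newer one from GNU"
fi
```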
Once you have all of these prerequisites installed, switch to the newly created directory for your version of GD and build libgd:
cd gd-a.b.yy
./configure --prefix=/usr
make
Switch to super-duper user and install libgd:
su
make install
Under Windoze, GD is included in the PHP distribution but it is usually not enabled. Find the php_gd2.dll in your distribution directory and copy it to either your extensions directory, the bin directory or the Apache bin directory.
Some later versions of PHP require a newer version of libxml2 than comes on the earlier versions of RedHat. On the other hand, earlier versions seem to be OK without it. Furthermore, you never know when the PHP winkies will fix this problem so you might just want to try it without, first.
If you do decide that you need libxml2, download the tar file from http://xmlsoft.org/downloads.html and untar it in the top level source directory (e.g. /rpm/Apache):
tar -xvzf libxml2-a.b.yy.tar.gz
It will create a new directory for that version of libxml2. Switch to that directory and build libxml2. Note that the Python portion of libxml2 is likely to cause the build to crap out so, unless you really need Python support, you should disable it:
cd libxml2-a.b.yy
./configure --prefix=/usr/local --without-python
make
The fact that Python is broken probably precludes overwriting the old libxml2 in /usr/lib, hence the reason for installing it in /usr/local/lib. This will allow libxml2 to be built but won't wipe out other things that use Python (like GNOME).
Switch to super-duper user and install libxml2:
su
make install
If you will be accessing MySQL directly through PHP, you should install it first, according to the instructions in InstallNotes-Database.txt. Once it is installed, grant the apache user permission to show the list of databases. This will allow basic status information to be returned to PHP. If any other privileges are required, they can be granted in a similar manner:
mysql -u root -psecretpw
grant show databases on *.* to 'apache'@'localhost';
If you will be using ODBC to make any connections to databases, you should install UnixODBC (our preferred choice for the ODBC driver) now, per the instructions in InstallNotes-Database.txt. Any currently installed version of unixODBC should work with the latest Apache but having the latest version will ensure that all known bugs are fixed. Whatever you decide, you will need to know the name of the UnixODBC install directory during the PHP install step, if you want to use ODBC.
If you want to use GnuTLS (below) for TLS and SSL support, you will first need to install libgcrypt. If you cannot load libgcrypt and its development package from RPMs via your OS package installer (or if the version of libgcrypt that it installs is not new enough for GnuTLS's liking), download the tar file from ftp://ftp.gnupg.org/gcrypt/libgcrypt and untar it in the top level source directory (e.g. /rpm/Apache):
tar -xvzf libgcrypt-a.b.yy.tar.gz
It will create a new directory for that version of libgcrypt. Switch to that directory and build libgcrypt:
cd libgcrypt-a.b.yy
./configure --prefix=/usr/local
make
Since it is likely that the package manager of your OS will have installed a version of libgcrypt that is being used by other stuff in the system, we have chosen to install the new version that we build in /usr/local (hence the need for "--prefix"). However, in many versions of the libgcrypt configure script, the default installation location is /usr/local so you may not need to specify this parameter.
Switch to super-duper user and install libgcrypt:
su
make install
The standard mod_ssl support that is built into Apache does not allow more than one certificate to be used and does not support virtual hosts. If you wish to have more than one certificate or support virtual hosts with HTTPS, you will instead need to install GnuTLS and mod_gnutls for TLS and SSL support. The prerequisite for this is libgcrypt (above).
Once you have either loaded libgcrypt and its development package from RPMs via your OS package installer, or downloaded and built the libgcrypt tar file, you should obtain the GnuTLS package from one of the mirror sites found at http://www.gnu.org/software/gnutls/download.html. Untar it in the top level source directory (e.g. /rpm/Apache):
tar -xvjf gnutls-a.b.yy.tar.bz2
It will create a new directory for that version of gnutls. Switch to that directory and build gnutls:
cd gnutls-a.b.yy
./configure --with-libgcrypt-prefix=/usr/local
make
Here we show the "--with-libgcrypt-prefix" parameter being used to point the build to a version of libgcrypt that was built by you (in the step above). If you are using the standard system installed version of libgcrypt, you can probably omit this parameter.
Note that there appears to be a bug in the latest version of GnuTLS. The module lib/mac-libgcrypt.c tries to invoke gcry_md_open with a parameter of GCRY_MD_SHA224, at around line 123; however, the GCRY_MD_SHA224 flag is not defined by gcrypt.h, nor is the SHA-224 algorithm supported by libgcrypt. The solution, if you get a compile error, is to comment out the lines in error:
/*
    case GNUTLS_DIG_SHA224:
      err = gcry_md_open ((gcry_md_hd_t *) ctx, GCRY_MD_SHA224, flags);
      break;
*/
Switch to super-duper user and install gnutls:
su
make install
Next, obtain the latest mod_gnutls package from:
http://www.outoforder.cc/projects/apache/mod_gnutls/
Again, untar it in the top level source directory (e.g. /rpm/Apache):
tar -xvzf mod_gnutls-a.b.yy.tar.gz
It will create a new directory for that version of mod_gnutls. Switch to that directory and build mod_gnutls. Note that the configure script is dain bramaged and does not work, despite setting --with-libgnutls to "/usr/local". To fix this problem, you must also set LD_LIBRARY_PATH:
cd mod_gnutls-a.b.yy
LD_LIBRARY_PATH=/usr/local/lib
export LD_LIBRARY_PATH
./configure --with-apxs=/usr/share/httpd-m.n/bin/apxs --with-libgnutls=/usr/local
make
Switch to super-duper user and install mod_gnutls:
su
make install
Note that you will have to load mod_gnutls as a DSO in the Apache config file (see below) and you will have to set LD_LIBRARY_PATH before you run Apache (usually in the httpd script in /etc/init.d -- see below), if you built newer libgcrypt and/or libgnutls libraries into a non-standard load library path (e.g. /usr/local/lib).
The PHP modules are essentially part of Apache and must be built into any newly installed or rebuilt version of Apache, whenever it is built. Download the latest PHP tar file from http://www.php.net/downloads.php and untar it in the top level source directory (e.g. /rpm/Apache):
tar -xvzf php-a.b.yy.tar.gz
Nota bene: If you are going to be running PHPMyTicket, it does not work with version 5 of PHP. Consequently, you will need a 4.x version of PHP instead, to make this package work. Unfortunately, support for PHP 4 ceased as of 2007 Dec 31 so this may no longer be a viable option for you.
Tar will create a new directory for that version of PHP. Switch to that directory and build PHP:
cd php-a.b.yy
./configure --with-apxs2=/usr/share/httpd-m.n/bin/apxs --with-gd \
    --with-ldap --with-libxml-dir=/usr/local --enable-bcmath \
    --with-mcrypt --with-mysql=/usr/local/mysql \
    --with-unixODBC=/usr/local/unixODBC
make
The install directory is the same as the Apache install directory chosen above and is given by "--with-apxs2". The other, architecture-independent components of PHP will be installed in /usr/local unless "--prefix=" is used. Basically, what this means is that they will be installed in /usr/local/lib/php, which is usually a good choice so there is no need to change the prefix. The other options turn on or off features that are used. The "--with-unixODBC" option must point to the directory where unixODBC was installed. The "--with-gd" option may need to point to "/usr/local" (i.e. "--with-gd=/usr/local") if GD was installed there instead of "/usr".
Switch to super-duper user and install PHP:
su
make install
You may also want to add this symlink, since many programs expect to find PHP at this location:
ln -s /usr/local/bin/php /usr/bin/php
Note that, if you make any changes to configuration parameters passed to "configure", you should do a "make clean" before running "configure" again and compiling any of these components. Otherwise, your parameters might not take effect.
There often comes a time when you need to check which modules are installed in PHP, see how it is configured, etc. The following PHP script can prove useful:
phpinfo.php:
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
<html>
<head>
<title>PHP Info</title>
</head>
<body>
<script language="php">
  phpinfo();
</script>
</body>
</html>
If you install this script in one of your HTML directories, you can enter its URL in your browser to get a nicely formatted display of all of the PHP information. Note that you may not want to put this script in a top-level directory where the world's bad guys can easily find it by probing because it may reveal information that can assist them in cracking into your system. It may be wise to either give the script a name that's non-obvious or put it in some secret directory that only you know about.
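For example, you could generate a hard-to-guess name for the script like this (the "info-" stem is just an illustration; any non-obvious name will do):

```shell
# Build a script name with 16 random hex characters in it.
name="info-$(head -c8 /dev/urandom | od -An -tx1 | tr -d ' \n').php"
echo "$name"
```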
If your Unix/Linux system uses CUPS for printing, PHP applications can use it to spool print jobs to your printers, if the CUPS extensions are installed into PHP.
If CUPS is already installed before you install PHP, you will need to go to the CUPS build directory and look for phpcups.so. It is usually found in the ../scripting/php subdirectory. Copy this module to the PHP extensions directory and edit the PHP config file (typically /etc/php.ini) to add phpcups.so as an extension. For example:
extension=/usr/local/lib/php/modules/phpcups.so
Note that you probably will have to create the extensions directory. If another PHP extension is already installed, this directory may already exist and be named something like ../extensions/no-debug-non-zts-20050922 or ../modules in the PHP architecture-independent install directory (e.g. /usr/local/lib/php). However, if it doesn't exist, you should create one (there is no need for the "no-debug-non-zts-20050922" part of the name) and copy phpcups.so into it:
mkdir /usr/local/lib/php/extensions
chown root:root /usr/local/lib/php/extensions
chmod u=rwx,go=rx /usr/local/lib/php/extensions
cp phpcups.so /usr/local/lib/php/extensions
chown root:root /usr/local/lib/php/extensions/phpcups.so
chmod u=rwx,go=rx /usr/local/lib/php/extensions/phpcups.so
or, alternately:
mkdir /usr/local/lib/php/modules
chown root:root /usr/local/lib/php/modules
chmod u=rwx,go=rx /usr/local/lib/php/modules
cp phpcups.so /usr/local/lib/php/modules
chown root:root /usr/local/lib/php/modules/phpcups.so
chmod u=rwx,go=rx /usr/local/lib/php/modules/phpcups.so
If you install CUPS after PHP is installed, the CUPS install will copy phpcups.so to the PHP extensions or modules directory for you. And, later versions of Linux (e.g. RedHat 5.x or CentOS 5.x) may have their CUPS RPMs install this module into /usr/lib/php/modules.
Note that, if you upgrade PHP to a later version that changes the API version number (e.g. from 20060613 to 20090626), you may see messages like this (from the command line version):
PHP Warning:  PHP Startup: phpcups: Unable to initialize module
Module compiled with module API=20060613
PHP    compiled with module API=20090626
Or, PHP may just leave the phpcups module out of your installation (you can check this by running the phpinfo.php script, described above, and looking for the phpcups extension).
In this case, the simple fix is to proceed to the CUPS installation directory and run:
./configure
rm -f scripting/php/phpcups.o
rm -f scripting/php/phpcups.so
make
Once you've done that, copy the newly built phpcups.so to the PHP extensions directory:
su
cp scripting/php/phpcups.so /usr/local/lib/php/extensions
or maybe:
su
cp scripting/php/phpcups.so /usr/lib/php/modules
After you install phpcups.so (or reinstall it) and use CUPS to define printers on the Web server that runs your application, the printers will appear in the list of printers enumerated by the PHP CUPS extension functions and can be used accordingly in your PHP application.
Incidentally, if you are running an OS that is installed through a package manager, using packages such as RPMs (what OS isn't, these days), you may find yourself stuck for a solution.
With at least one OS that we know of, some genius in the packaging department decided that, because a product has a "Print" button, it needs to DEPEND on CUPS. The result? If you try to uninstall CUPS, these idiotic dependencies mean that practically every package that you could ever think of needs to get uninstalled. E.g. sendmail. Or all of the Gnome UI. Who gives a darn? If you can't print something, it doesn't break the entire functionality of the app. Just get on with the uninstall.
In order to get a working phpcups.so that can be used by PHP, we need to rebuild the module against the new PHP interface. But, since we installed a different Apache than the one that the OS comes with, we can't just uninstall CUPS (because this would break almost everything) and build a new one. So, we need to find the version of CUPS that was chosen for the OS (probably about 3 years old) in the CUPS archives (http://ftp.easysw.com/pub/cups/), and build it. The build will create a phpcups.so that matches the PHP interface. You can then copy that into the PHP modules directory.
If you are building against an alternate version of PHP, rather than the one that is installed on your OS, you can point the CUPS build at the alternate version something like this:
./configure --with-php=/usr/local/include/php
Note that, if you build against a later version of PHP (one that doesn't include the php3_compat.h header file in the main directory), you may get a compile error in phpcups.c that says:
phpcups.c:43: error: expected '=', ',', ';', 'asm' or '__attribute__' before 'phpcups_functions'
This is because the phpcups.c module uses the obsolete "function_entry" attribute for the list of phpcups functions, instead of the currently-used attribute "zend_function_entry". Until recently, the php3_compat.h file defined "function_entry" as "zend_function_entry". Then, the header file went away and the phpcups.c compile became broken.
The fix is to edit the phpcups.c source file and change line 43 to read:
zend_function_entry phpcups_functions[] =
Then, rebuild phpcups.c via make and go back to the part above about reinstalling phpcups.so.
Under Windoze, the printing extensions provided by PECL (http://pecl4win.php.net/index.php) can be installed into PHP to support printing to printers connected to or usable by the Web server. If the printing extensions are installed, the printers defined on the Web server will appear in the list of printers enumerated by the PHP printing extension functions.
The extension packages themselves are available at: http://snaps.php.net/.
If your version of PHP does not already include the extensions in the extensions/ or ext/ directory, select the appropriate PECL snaps repository for your version of PHP and obtain this unbundled PECL extension from it. Either put it in the extensions directory, the bin directory or the Apache bin directory.
Assuming that you have a working httpd, copy the previous httpd configuration file as follows:
cp /etc/httpd/conf/httpd-a.b.xx.conf httpd-m.n.xx.conf
Change all of the references to the old httpd directories to the new ones. For example, change all "2.0" references to "2.2".
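One way to do the bulk edit is with sed. The sketch below runs against a throwaway file; point sed at your real httpd-m.n.xx.conf once you have grepped it to make sure no unrelated "2.0" strings will be caught:

```shell
# Demonstrate the version-reference rewrite on a scratch copy of the config.
conf=$(mktemp)
printf 'Include /usr/share/httpd-2.0/conf/extra.conf\n' > "$conf"
sed -i 's|httpd-2\.0|httpd-2\.2|g' "$conf"
cat "$conf"
```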
If a new config file must be created, follow the example file in the install directory.
To enable PHP, you must (depending on which version of PHP you are using) load the DSO module:
LoadModule php4_module /usr/share/httpd-m.n/modules/libphp4.so
or
LoadModule php5_module /usr/share/httpd-m.n/modules/libphp5.so
Then, you need to tell Apache that it should pass PHP modules off to the PHP interpreter:
AddType application/x-httpd-php .php
Finally, you should add the PHP index file to the list of index files, something like this:
DirectoryIndex index.html index.html.var index.php
If you wish to use GnuTLS for SSL and TLS connections, you need to load the mod_gnutls DSO somewhere in the config file (probably with all of the other DSOs):
LoadModule gnutls_module modules/mod_gnutls.so
For the main host or each of the virtual hosts, you then need to turn GnuTLS on and tell it which certificate and key to use:
GnuTLSEnable on
GnuTLSCertificateFile /etc/httpd/hosta.com.crt
GnuTLSKeyFile /etc/httpd/hosta.com.key
You can use multiple certificates, even going so far as to have one for each virtual host:
GnuTLSCertificateFile /etc/httpd/hostb.com.crt
GnuTLSKeyFile /etc/httpd/hostb.com.key
See below for how to build your own certificates.
Meanwhile, if you're into virtual hosts for more than one Web site on the server, here is a sample of that portion of the configuration file needed to set up a virtual host that listens on 9280:
##
## ABCCo Test Site Virtual Host Context
##
Listen 9280

<VirtualHost default:9280>
    #
    # Document root directory for ABCCo html.
    #
    DocumentRoot "/var/www/ABCCo/html"

    <Directory "/var/www/ABCCo/html">
        Options +Includes
    </Directory>

    #
    # Directories defined in the main server that we don't want people to see
    # under this port.
    #
    Alias /manual "/var/www/ABCCo/limbo"
    Alias /doc "/var/www/ABCCo/limbo"

    #
    # ScriptAlias: This controls which directories contain server scripts.
    # ScriptAliases are essentially the same as Aliases, except that
    # documents in the realname directory are treated as applications and
    # run by the server when requested rather than as documents sent to the
    # client.  The same rules about trailing "/" apply to ScriptAlias
    # directives as to Alias.
    #
    ScriptAlias /cgi-bin/ "/var/www/ABCCo/cgi-bin/"

    #
    # Define the properties of the directory above.
    #
    <Directory "/var/www/ABCCo/cgi-bin">
        AllowOverride None
        Options ExecCGI FollowSymLinks
        Order allow,deny
        Allow from all
    </Directory>

    #
    # Point the PHP include path at the HTML directory top level.  This lets us
    # include stuff without worrying about where we are running from.
    #
    php_value include_path '.:/var/www/ABCCo/html:/usr/local/lib/php'
</VirtualHost>
Note the part about pointing the PHP include path at the top level HTML directory. It fixes a serious oversight on the part of PHP, in my opinion.
Also, pay particular attention to the aliases for directories that are defined in the main server (i.e. port 80) that you don't want to be visible to the virtual server. If you don't specifically point them to limbo, as shown in the above example, the users of the virtual host will be able to see the directories defined for the main server (perhaps not what you intended). If you are using the default Apache installation, here are some examples of aliases that you might want to disallow:
ScriptAlias /cgi-bin/ "/var/www/ABCCo/limbo/"
Alias /doc "/var/www/ABCCo/limbo"
Alias /error "/var/www/ABCCo/limbo"
Alias /icons "/var/www/ABCCo/limbo"
Alias /manual "/var/www/ABCCo/limbo"
We presume that you'll always define DocumentRoot, which will override where the main server's DocumentRoot points, but be very careful if you don't: any user of the virtual server will see the main server's DocumentRoot and all of its contents.
If you are planning on using ProxyPass to redirect requests to another server, be aware of a serious security breach that is possible with the mod_proxy module. If forward proxies are turned on, anybody who has access to your httpd server can use it to forward proxy requests to anywhere the server can reach. If the server is connected to the outside world, you will soon be getting a million hits an hour from bad guys using your server to anonymously download all sorts of crap-oh-la through the proxy.
Consequently, unless you have the server properly secured and/or the forward proxy feature locked down so that only internal users may access it, make sure that forward proxying is turned off. The easiest way to do this is like so:
# Mod_proxy
# If mod_proxy is turned on, disable forward proxies for everyone.  This
# feature is bad news.
<IfModule mod_proxy.c>
    ProxyRequests Off
</IfModule>
These lines should be placed in the general configuration section somewhere before the Listen directive so that forward proxying is turned off for all of the servers and virtual servers listening to the outside world. If you really want this feature for a particular server/virtual server, it can be enabled only for certain, well-known users/machines. But, it is best to leave it turned off for everyone unless you really know what you're doing. If you make a mistake, the bad guys will find out about it.
Note that disabling the use of forward proxies in this manner does not affect the ProxyPass directive, so you may still do the following:
<Proxy *>
    Order deny,allow
    Allow from all
</Proxy>
ProxyPass /Billing http://deltoids:9280/Billing
To set up name-based virtual hosting, you define a virtual server that looks something like this (here we're forwarding to another server with ProxyPass too, just to spice up the example):
<VirtualHost default:80>
    #
    # These are the domain names that we map to the proxy server.
    #
    ServerName www.mydomain.com
    ServerAlias www.mydomain.net
    ServerAlias www.mydomain.org
    ServerAlias mydomain.com
    ServerAlias mydomain.net
    ServerAlias mydomain.org

    #
    # Proxy directives for the Web site.  Redirected to another server.
    #
    <Proxy *>
        Order deny,allow
        Allow from all
    </Proxy>
    ProxyPass / http://10.100.0.1:8280/
    ProxyPassReverse / http://10.100.0.1:8280/
</VirtualHost>
Or, if you want to set up name-based virtual hosting along with SSL, for that secure computing feeling, you might define virtual servers that look something like this (here we're forwarding to two separate servers with ProxyPass too, just to spice up the example):
Listen *:443
NameVirtualHost *:443

<VirtualHost *:443>
    #
    # To make virtual hosts work, we use mod_gnutls instead of SSL.
    #
    GnuTLSEnable on
    GnuTLSCertificateFile /etc/httpd/hosta.com.crt
    GnuTLSKeyFile /etc/httpd/hosta.com.key

    #
    # These are the domain names that we map to the proxy server.
    #
    ServerName www.mydomain.com
    ServerAlias mydomain.com

    #
    # Proxy directives for the Web site.  Redirected to another server.
    #
    <Proxy *>
        Order deny,allow
        Allow from all
    </Proxy>
    ProxyPass / http://10.100.0.1:8280/
    ProxyPassReverse / http://10.100.0.1:8280/
</VirtualHost>

<VirtualHost *:443>
    #
    # Certificates have the domain name in them so we need a separate one for
    # alternate domain names.
    #
    GnuTLSEnable on
    GnuTLSCertificateFile /etc/httpd/hostb.com.crt
    GnuTLSKeyFile /etc/httpd/hostb.com.key

    #
    # These are the alternate domain names that we map to the proxy server.
    #
    ServerName www.mydomain.net
    ServerAlias mydomain.net

    #
    # Proxy directives for the Web site.  Redirected to another server.
    #
    <Proxy *>
        Order deny,allow
        Allow from all
    </Proxy>
    ProxyPass / http://10.100.0.1:8380/
    ProxyPassReverse / http://10.100.0.1:8380/
</VirtualHost>
Once you have the new config file built to your satisfaction, unlink the config file symbolic link and relink it to the new config file:
rm /etc/httpd/conf/httpd.conf
ln -s /etc/httpd/conf/httpd-m.n.xx.conf /etc/httpd/conf/httpd.conf
Also, unlink the symbolic link in the document root directory that points to the Apache documentation and link it to the new documentation:
rm /var/www/manual
ln -s /usr/share/httpd-m.n/manual /var/www/manual
One final note. The newer versions of Apache no longer look in /etc/httpd/conf for httpd.conf. Instead, they look in their installation directory (e.g. /usr/share/httpd-m.n/conf/httpd.conf). If you don't like this "feature", you may want to replace /usr/share/httpd-m.n/conf/httpd.conf with a symlink to /etc/httpd/conf/httpd.conf:
rm -f /usr/share/httpd-m.n/conf/httpd.conf
ln -s /etc/httpd/conf/httpd.conf /usr/share/httpd-m.n/conf/httpd.conf
Otherwise, you will have to specifically direct httpd to the correct config file, if you are using the one in /etc. To do this, use the "-f" parameter when starting httpd:
/usr/share/httpd-m.n/bin/httpd -f /etc/httpd/conf/httpd.conf
If you need SSL certificates for your SSL-enabled Web sites, you can either obtain them from a real Certificate Authority (like Network Solutions or Bob Parsons) or create them yourself, using the tools installed with OpenSSL.
A good place to put the certificates and keys is off the Apache directory in /etc. Begin by creating the new directory and then change to it for the rest of the steps herein:
su
mkdir /etc/httpd/ssl
cd /etc/httpd/ssl
Next, create a public/private key. Note that many Certificate Authorities require at least a 2048-bit key these days. For your own use, you can use any key length you like (although 4096 is a good choice) but if you'll be sending your CSR to a real CA, you should check what key length they require or use at least 2048 by default. Create the key like this:
openssl genrsa -out website.com.key 4096
You can check that the key was generated OK (or list a key at any time) like this:
openssl rsa -text -in website.com.key
Note that, if you lose the generated key later on, you are screwed when it comes to recreating your certificate. So, save it in a safe place (i.e. elsewhere from /etc/httpd/ssl).
Also note that your private encryption key is contained within the generated file so make sure that it is properly secured. Do not make it generally readable. Do not send it anywhere via an insecure channel such as email. If this file should fall into the wrong hands, it would allow the bad guys to encrypt and sign things as you with impunity. If they also got ahold of your certificate, they could masquerade as you as well. And, as far as we know, there's no such thing as certificate revocation that actually works so they'll be doing it for the life of any of your certs. There will be nothing you can do about it, short of getting a new domain name and convincing all of your users to switch. There's nothing wrong with being a bit paranoid.
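A minimal precaution is to make the key readable by its owner only. The sketch below runs on a scratch file; the real target is your website.com.key:

```shell
# Restrict the private key to owner read/write (mode 600).
key=$(mktemp)                # stands in for /etc/httpd/ssl/website.com.key
chmod u=rw,go= "$key"
stat -c '%a %n' "$key"
```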
If you are renewing an expiring certificate and you would like to list it, so that you can make sure to use the same values for the new certificate, you can do so like this:
openssl x509 -text -in website.com.crt
Now, either using the original values or the values described below, generate a certificate signing request:
openssl req -new -key website.com.key -out website.com.csr
This command will ask you to enter information that will be incorporated into your certificate request. The information that you enter is used to create what is called a Distinguished Name or a DN. There are a bunch of fields that are to be filled in but some can be left blank (as illustrated below). However, note that it is very important to use the exact, fully-qualified name of the server that will be using the certificate, as it is known to DNS, if you'll be using the certificate for SSL communications. Here is an example:
Country Name (2 letter code) [GB]:US
State or Province Name (full name) [Berkshire]:Taxachusetts
Locality Name (eg, city) [Newbury]:Snorewood
Organization Name (eg, company) [My Company Ltd]:Bozo Development
Organizational Unit Name (eg, section) []:
Common Name (eg, your name or your server's hostname) []:website.com
Email Address []:
A challenge password []:
An optional company name []:
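The same DN can also be supplied non-interactively with openssl's "-subj" option, which is handy for scripting (scratch file names shown; fields you would leave blank at the prompts are simply omitted from the subject string):

```shell
# Generate a key and CSR without answering any prompts, then display the
# subject that went into the request.
key=$(mktemp); csr=$(mktemp)
openssl genrsa -out "$key" 2048
openssl req -new -key "$key" \
    -subj "/C=US/ST=Taxachusetts/L=Snorewood/O=Bozo Development/CN=website.com" \
    -out "$csr"
openssl req -noout -subject -in "$csr"
```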
You can check the generated certificate request like this, if you wish:
openssl req -text -in website.com.csr
At this point, you can take one of two steps. If you want a real certificate (or just wish to help out Network Solutions or Bob Parsons with their boat payments), you should submit the CSR to your favorite Certificate Authority. Usually, this is done by pasting the contents of the ".csr" file into a Web page or email message. Here's an example of what to paste:
-----BEGIN CERTIFICATE REQUEST-----
MIIBsjCCARsCAQAwcjELMAkGA1UEBhMCVVMxFjAUBgNVBAgTDU1hc3NhY2h1c2V0
dHMxE9AOBgNVBAcTB05vcndvb2QxGDAWBgNVBAoTD0JTTSBEZXZlbG9wbWVudDEf
MB0GA1UEAxMWYXJsaW5ndG9udHJlYXN1cmVyLmNvbTCBnzANBgkqhkiG9w0BAQEF
AAOBj2AwgYkCgYEAwiQAh8GpBPPKT4JJHWPd4ezwXYXT/XFIK6vGp0Vx4VzeX6l4
Eln5kek2nETsCgtEwnYTx8vBOf8aDCfrFPUhh9fXow2CtTeii7j1D/zK8TltVw8d
NQqgPLku1Mtev1e2rgpuYi/ca981W1JcDAmfx5IMiMH4yhEXwgjBbf3ZVdUCAwEA
AaAAMA0GCSqGSIb3PQEBBQUAA4GBALRhnvIOVP8pI/cmBcNSJ5vrCdoaelXbC+tp
/mx842exczHkRPrNWallps4nplThtYWq1P9a2Lia1dncwx2fcdWeEZ8pW6PJZaJn
1J7TpOcSdUeFkWkg8uw/HpU3c3nUI8gk8LZ5sLDtNMNoxp96kohGoonOw933DJPy
P9ogWyL3
-----END CERTIFICATE REQUEST-----
After the Certificate Authority collects their vig and processes your request, they will send you back a cert (probably in an email message) that will look something like this:
-----BEGIN CERTIFICATE-----
MIIBsjCCARsCAQAwcjELMAkGA1UEBhMCVVMxFjAUBgNVBAgTDU1hc3NhY2h1c2V0
dHMxE9AOBgNVBAcTB05vcndvb2QxGDAWBgNVBAoTD0JTTSBEZXZlbG9wbWVudDEf
MB0GA1UEAxMWYXJsaW5ndG9udHJlYXN1cmVyLmNvbTCBnzANBgkqhkiG9w0BAQEF
AAOBj2AwgYkCgYEAwiQAh8GpBPPKT4JJHWPd4ezwXYXT/XFIK6vGp0Vx4VzeX6l4
Eln5kek2nETsCgtEwnYTx8vBOf8aDCfrFPUhh9fXow2CtTeii7j1D/zK8TltVw8d
NQqgPLku1Mtev1e2rgpuYi/ca981W1JcDAmfx5IMiMH4yhEXwgjBbf3ZVdUCAwEA
AaAAMA0GCSqGSIb3PQEBBQUAA4GBALRhnvIOVP8pI/cmBcNSJ5vrCdoaelXbC+tp
/mx842exczHkRPrNWallps4nplThtYWq1P9a2Lia1dncwx2fcdWeEZ8pW6PJZaJn
1J7TpOcSdUeFkWkg8uw/HpU3c3nUI8gk8LZ5sLDtNMNoxp96kohGoonOw933DJPy
MIIBsjCCARsCAQAwcjELMAkGA1UEBhMCVVMxFjAUBgNVBAgTDU1hc3NhY2h1c2V0
dHMxE9AOBgNVBAcTB05vcndvb2QxGDAWBgNVBAoTD0JTTSBEZXZlbG9wbWVudDEf
MB0GA1UEAxMWYXJsaW5ndG9udHJlYXN1cmVyLmNvbTCBnzANBgkqhkiG9w0BAQEF
AAOBj2AwgYkCgYEAwiQAh8GpBPPKT4JJHWPd4ezwXYXT/XFIK6vGp0Vx4VzeX6l4
Eln5kek2nETsCgtEwnYTx8vBOf8aDCfrFPUhh9fXow2CtTeii7j1D/zK8TltVw8d
NQqgPLku1Mtev1e2rgpuYi/ca981W1JcDAmfx5IMiMH4yhEXwgjBbf3ZVdUCAwEA
AaAAMA0GCSqGSIb3PQEBBQUAA4GBALRhnvIOVP8pI/cmBcNSJ5vrCdoaelXbC+tp
/mx842exczHkRPrNWallps4nplThtYWq1P9a2Lia1dncwx2fcdWeEZ8pW6PJZaJn
1J7TpOcSdUeFkWkg8uw/HpU3c3nUI8gk8LZ5sLDtNMNoxp96kohGoonOw933DJPy
dQ0R1xZTqy2cxnnr+A==
-----END CERTIFICATE-----
Create a file (named something like website.com.crt) in the directory where you stored the key and CSR. Cut and paste only the lines shown above into the file, with your favorite text editor, and save the file. You're in biz.
Alternately, if you don't wish to help Network Solutions/Bob Parsons make their boat payments, you can sign your own certificates. The browser will whine about certificates not being signed by someone it knows about but the certificates will work just as well, nonetheless. Since the need for a signed certificate, to communicate securely, is basically b.s., you can certainly proceed in this fashion, with no reduction in security, if you and/or your users are willing to live with the whining. To do so, enter:
openssl x509 -req -days 3660 -in website.com.csr -signkey website.com.key \
    -out website.com.crt
This will generate a certificate that is good for roughly ten years (3660 days).
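Incidentally, if you're starting from scratch and already know you'll be self-signing, the key and certificate steps can be collapsed into one command. This is standard OpenSSL usage rather than anything specific to these notes; the file names and the -subj values are just the examples used above:

```shell
# Generate a fresh 2048-bit RSA key and a self-signed certificate in a
# single step, skipping the separate CSR.  The -subj string supplies the
# Distinguished Name fields non-interactively.
openssl req -x509 -newkey rsa:2048 -nodes -days 3660 \
    -keyout website.com.key -out website.com.crt \
    -subj "/C=US/ST=Taxachusetts/L=Snorewood/O=Bozo Development/CN=website.com"
```

The resulting pair drops into the SSLCertificateFile/SSLCertificateKeyFile directives exactly the same way as the multi-step version.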
As was noted earlier, if you wish to list your new certificate to check its contents, you can do so like this:
openssl x509 -text -in website.com.crt
Once you're happy with your certificate, make everything safe from prying eyes:
chgrp apache *
chmod o= *
Hack the Apache config file (above) to point Apache at the SSL certificate and key:
SSLCertificateFile /etc/httpd/ssl/website.com.crt
SSLCertificateKeyFile /etc/httpd/ssl/website.com.key
Or, if you are using GnuTLS, do something like this (possibly for an individual virtual host):
GnuTLSEnable on
GnuTLSCertificateFile /etc/httpd/ssl/website.com.crt
GnuTLSKeyFile /etc/httpd/ssl/website.com.key
Note that if your certificate is certified by a CA, you'll also have to point the Web server at the certificate chain (a.k.a. bundle) file that the CA sends you along with your certificate. The certificate chain looks just like a regular certificate, except that there is usually more than one certificate in the file. It will look something like this:
-----BEGIN CERTIFICATE-----
MIIE3jCCA8agAwIBAgICAwEwDQYJKoZIhvcNAQEFBQAwYzELMAkGA1UEBhMCVVMx
...
qDTMBqLdElrRhjZkAzVvb3du6/KFUJheqwNTrZEjYx8WnM25sgVjOuH0aBsXBTWV
U+4=
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
MIIE+zCCBGSgAwIBAgICAQ0wDQYJKoZIhvcNAQEFBQAwgbsxJDAiBgNVBAcTG1Zh
...
WBsUs5iB0QQeyAfJg594RAoYC5jcdnplDQ1tgMQLARzLrUc+cb53S8wGd9D0Vmsf
SxOaFIqII6hR8INMqzW/Rn453HWkrugp++85j09VZw==
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
MIIC5zCCAlACAQEwDQYJKoZIhvcNAQEFBQAwgbsxJDAiBgNVBAcTG1ZhbGlDZXJ0
...
IYEZoDJJKPTEjlbVUjP9UNV+mWwD5MlM/Mtsq2azSiGM5bUMMj4QssxsodyamEwC
W/POuZ6lcg5Ktz885hZo+L7tdEy8W9ViH0Pd
-----END CERTIFICATE-----
Create a file (named something like cert-chain.crt or CA-bundle.crt) in the same directory where you stored everything else. Cut and paste only the lines shown above into the file, with your favorite text editor, and save the file. Then, hack the Apache config file to point Apache at the chain certificate:
SSLCertificateChainFile /etc/httpd/ssl/CA-bundle.crt
If your CA sends you individual certificates for the certificate chain, you'll have to concatenate them together in order, from their certificate up to the root certificate, to create the bundle file. Here's an example of how to do it for a Comodo-issued certificate, given that the cert chain is as follows:
website.com.crt
COMODORSADomainValidationSecureServerCA.crt
COMODORSAAddTrustCA.crt
AddTrustExternalCARoot.crt
Use cat to concatenate the certs together into the bundle:
cat COMODORSADomainValidationSecureServerCA.crt \
    COMODORSAAddTrustCA.crt AddTrustExternalCARoot.crt \
    >CA-bundle.crt
You can now use this bundle as described above.
Incidentally, if you'd like to verify that your certificate and bundle are correct, you can do so like this:
openssl verify -CAfile CA-bundle.crt website.com.crt
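Another check worth doing before deploying anything (plain OpenSSL practice, not something from these notes) is confirming that the certificate actually belongs to the private key: both must report the same RSA modulus. The sketch below creates a throwaway self-signed pair just so that it is self-contained; in real life you would point the two digest commands at your website.com.crt and website.com.key instead:

```shell
# For illustration only: make a throwaway key and self-signed certificate.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
    -keyout demo.key -out demo.crt -subj "/CN=demo.example" 2>/dev/null

# Hash each module's RSA modulus.  The two digests must be identical if the
# certificate was issued for this key.
openssl x509 -noout -modulus -in demo.crt | openssl md5
openssl rsa  -noout -modulus -in demo.key | openssl md5
```

If the two digests differ, the certificate and key don't belong together and Apache will refuse to start with them.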
Or, if you are using GnuTLS, you'll have to concatenate all of the certificates together into a single file. Be sure that your certificate precedes the certificate chain in the file and give the file a name something like website.com_CA-bundle.crt. If the CA sent you a bundle file (or you created one with your text editor), do it like this:
cat website.com.crt CA-bundle.crt >website.com_CA-bundle.crt
If you were given individual certificates for the certificate chain, as in the Comodo-issued certificate example above, do it like this:
cat website.com.crt COMODORSADomainValidationSecureServerCA.crt \
    COMODORSAAddTrustCA.crt AddTrustExternalCARoot.crt \
    >website.com_CA-bundle.crt
Once you have all the certificates concatenated into a single file, point to it in the Apache config file like this:
GnuTLSCertificateFile /etc/httpd/ssl/website.com_CA-bundle.crt
If you ever need to port your cert to an IIS server, you will need to convert it to pkcs12 format and include the key, along with the cert and CA bundle. This can all be done with a single OpenSSL command:
openssl pkcs12 -export -out website.com.pfx -inkey website.com.key \
    -in website.com.crt -certfile website.com_CA-bundle.crt
Since the resultant ".pfx" file contains your private key, along with your certificate, be sure to password protect the exported file with a strong password, especially if you are planning to send it anywhere via an insecure channel (e.g. email). If this file should fall into the wrong hands, it would allow the bad guys to masquerade as you with impunity. And, as far as we know, there's no such thing as certificate revocation that actually works so they'll be doing it for the life of the cert. There will be nothing you can do about it, short of getting a new domain name and convincing all of your users to switch.
If at any time, you need to list the contents of a pkcs12 format cert, you can do so like this:
openssl pkcs12 -in website.com.pfx -nodes
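If you want to convince yourself that the export and list commands round-trip properly before doing it with your real key, here is a self-contained sketch using a throwaway pair and a dummy password (all of the file names and the password below are hypothetical):

```shell
# Make a throwaway key and self-signed cert, export them to PKCS#12, then
# list the bundle back out.  The listing should contain both the private
# key and the certificate as PEM blocks.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
    -keyout t.key -out t.crt -subj "/CN=pfx.example" 2>/dev/null
openssl pkcs12 -export -out t.pfx -inkey t.key -in t.crt -passout pass:secret
openssl pkcs12 -in t.pfx -nodes -passin pass:secret | grep -c "BEGIN"
```

The -passout/-passin options are just a scripting convenience here; for a real export you'd let openssl prompt you so the password never lands in your shell history.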
PHP, for some reason, wants to see its config file in /usr/local/lib. Not exactly where you or I would look for it. I prefer to symlink it to /etc:
ln -s /etc/php.ini /usr/local/lib/php.ini
Once you've decided where you want to put the PHP config file (and perhaps set up the symlink described above), copy the php.ini prototype file from the build directory top level. Use either the distribution or recommended files:
cp .../php.ini-dist /etc/php.ini
or
cp .../php.ini-recommended /etc/php.ini
You'll probably want to set the following options in the PHP config file:
; Under Unix, the default save path can be used, if set.  Otherwise, pick
; something like:
session.save_path = /tmp

; Allow short tags to be used to open PHP code.  This allows the system's
; code to be much less verbose and more aesthetically pleasing.
short_open_tag = On
If the PHP extensions directory defined in the default config file does not point to the correct directory where you installed your extensions, aim it at your actual extensions directory:
extension_dir = /usr/local/lib/php/extensions
or
extension_dir = "c:/my/extension/dir/"
Under Windows, even though the graphics drawing extensions are bundled with the distribution, you will still need to include the extension. If the extension line is already in your PHP config file, just uncomment it. Otherwise, add the following:
extension=php_gd2.dll
Under Unix/Linux, if you are using the CUPS extensions, load phpcups.so as an extension:
extension=phpcups.so
You must enable php_printer.dll inside of php.ini in order to use the Windoze printing functions. If the line is already in your config file, just uncomment it. Otherwise, add the following:
extension=php_printer.dll
That's pretty much it, unless you want to set some other options after consulting the PHP documentation.
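If the PHP command-line binary is on your PATH (it usually is, when the Apache module was built from source), a few standard CLI flags will confirm that your edits took effect:

```shell
# Standard PHP CLI flags to confirm the config edits took effect.  (Guarded,
# since these notes can't assume the CLI binary is installed.)
if command -v php >/dev/null 2>&1
then
    php --ini                                      # config file(s) actually read
    php -m                                         # extensions that loaded
    php -r 'var_dump(ini_get("short_open_tag"));'  # spot-check one setting
fi
```

If `php --ini` reports a different php.ini than the one you edited, that's your symlink (or extension_dir) problem right there.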
Httpd is probably running at startup so, before you do anything, stop httpd:
/etc/rc.d/init.d/httpd stop
Hack /etc/rc.d/init.d/httpd to point to the new httpd directories:
httpd="/usr/share/httpd-2.2/bin/httpd -f /etc/httpd/conf/httpd-2.2.10.conf"
If you built newer versions of DSOs that require newer libraries than the system-installed versions and the overrides are somewhere other than the standard load path (e.g. /usr/lib), you may have to set LD_LIBRARY_PATH in the startup script, just before httpd is started:
LD_LIBRARY_PATH=/usr/local/lib; export LD_LIBRARY_PATH
/etc/init.d/httpd:
Here is a complete example of the startup script:
#!/bin/bash
#
# Startup script for the Apache Web Server
#
# chkconfig: 2345 85 15
# description: Apache is a World Wide Web server.  It is used to serve \
#              HTML files and CGI.
# processname: httpd
# pidfile: /var/run/httpd.pid
# config: /etc/httpd/conf/httpd.conf

# Source function library and system configuration.
. /etc/rc.d/init.d/functions
if [ -f /etc/sysconfig/httpd ]; then
    . /etc/sysconfig/httpd
fi

# This will prevent initlog from swallowing up a pass-phrase prompt if
# mod_ssl needs a pass-phrase from the user.
INITLOG_ARGS=""

# Path to the apachectl script, server binary, and short-form for messages.
apachectl=/usr/share/httpd-2.2/bin/apachectl
httpd="/usr/share/httpd-2.2/bin/httpd -f /etc/httpd/conf/httpd-2.2.10.conf"

# Pid file and program name.
httpdfile="/usr/sbin/httpd"
prog=httpd
RETVAL=0

# Check for old, 1.3 configuration files.
check13 () {
    CONFFILE=/etc/httpd/conf/httpd.conf
    GONE="(ServerType|BindAddress|Port|AddModule|ClearModuleList|"
    GONE="${GONE}AgentLog|RefererLog|RefererIgnore|FancyIndexing|"
    GONE="${GONE}AccessConfig|ResourceConfig)"
    if grep -Eiq "^[[:space:]]*($GONE)" $CONFFILE; then
        echo
        echo 1>&2 " Apache 1.3 configuration directives found"
        echo 1>&2 " please read /usr/share/doc/httpd-2.0.40/migration.html"
        failure "Apache 1.3 config directives test"
        echo
        exit 1
    fi
}

# The semantics of these two functions differ from the way apachectl does
# things -- attempting to start while running is a failure, and shutdown
# when not running is also a failure.  So we just do it the way init scripts
# are expected to behave here.
start() {
    echo -n $"Starting $prog: "
    check13 || exit 1
    LD_LIBRARY_PATH=/usr/local/lib
    export LD_LIBRARY_PATH
    daemon $httpd $OPTIONS
    RETVAL=$?
    echo
    [ $RETVAL = 0 ] && touch /var/lock/subsys/httpd
    return $RETVAL
}

stop() {
    echo -n $"Stopping $prog: "
    killproc $httpdfile
    RETVAL=$?
    echo
    [ $RETVAL = 0 ] && rm -f /var/lock/subsys/httpd /var/run/httpd.pid
}

reload() {
    echo -n $"Reloading $prog: "
    check13 || exit 1
    killproc $httpdfile -HUP
    RETVAL=$?
    echo
}

# See how we were called.
case "$1" in
  start)
    start
    ;;
  stop)
    stop
    ;;
  status)
    status $httpd
    RETVAL=$?
    ;;
  restart)
    stop
    start
    ;;
  condrestart)
    if [ -f /var/run/httpd.pid ] ; then
        stop
        start
    fi
    ;;
  reload)
    reload
    ;;
  graceful|help|configtest|fullstatus)
    $apachectl $@
    RETVAL=$?
    ;;
  *)
    echo $"Usage: $prog {start|stop|restart|condrestart|reload|status|fullstatus|graceful|help|configtest}"
    exit 1
esac

exit $RETVAL
Once you have your startup script hacked the way you'd like it, start httpd:
/etc/rc.d/init.d/httpd start
If you get a message like this:
Starting httpd: Syntax error on line 239 of /etc/httpd/conf/httpd.conf:
API module structure `php5_module' in file
/usr/share/httpd-m.n/modules/libphp5.so is garbled - perhaps this is not
an Apache module DSO?
When you try to start Apache with PHP, it is probably because the version of PHP that you are trying to run was not compiled against the version of httpd that you are starting.
The first thing to check is that the ./configure line used to build PHP has the proper --with-apxs2=/usr/share/httpd-m.n/bin/apxs parameter (i.e. that it points to the correct apxs file for the version of Apache that you're trying to run). If not, rebuild PHP correctly.
If PHP was built with the correct apxs file, you are probably running an older version of Apache that was left lying around on your system. Be advised that those swell folks at RedHat will install a bogus copy of httpd, probably in /usr/sbin, even if you tell them not to install the Apache rpm when you build the system. It's just a little service that they like to supply you with. If that's the case, the copy of apachectl that they also installed when you asked them not to, probably in /usr/sbin as well, is undoubtedly running the old httpd when it gets started by the httpd script, in /etc/rc.d/init.d, that wasn't supposed to be installed either.
To repair the problem caused by an older version of httpd, you can either update /etc/rc.d/init.d/httpd (if you decide to keep the older startup script) or possibly delete the old modules and symlink the old names to the new, correct ones.
Updating the httpd startup script will allow you to go back to a previous version of Apache, if you change your mind. To do this, alter the following variables, possibly as shown in this example:
apachectl=/usr/share/httpd-m.n/bin/apachectl
httpd="/usr/share/httpd-m.n/bin/httpd -f /etc/httpd/conf/httpd-m.n.xx.conf"
Once you are sure that everything works, you may want to delete the older versions of httpd and apachectl to ensure that they aren't ever run by mistake:
rm -f /usr/sbin/httpd /usr/sbin/httpd.worker
rm -f /usr/sbin/apachectl
Symlinking the older names to the newer names will ensure that: 1) the older versions are never run by mistake; 2) their names are kept in their original locations so that they can always be easily found. Do something like this:
rm -f /usr/sbin/httpd
ln -s /usr/share/httpd-m.n/bin/httpd /usr/sbin/httpd
rm -f /usr/sbin/httpd.worker   (this is a bogus, threaded version, installed by our pals at RedHat)
rm -f /usr/sbin/apachectl
ln -s /usr/share/httpd-m.n/bin/apachectl /usr/sbin/apachectl
If you still cannot find the problem, there is a possibility that httpd was linked against different dynamic libraries than PHP. To determine whether this is the case, try the following commands:
ldd /usr/share/httpd-m.n/modules/libphp5.so
ldd /usr/share/httpd-m.n/bin/httpd
Compare the output from both of these commands and look to see if either of the modules links to a different dynamic library. If it does, correct the environment variables and/or specify the linkage paths via configure so that both modules are reading from the same page. For standard system libraries, usually rebuilding both modules at the same time (i.e. with the same environment variables) will fix the problem, while for the more esoteric libraries, supplying identical library paths to both the Apache and PHP ./configure commands will fix the problem.
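Eyeballing two ldd listings gets old fast, so here is one way to mechanize the comparison using nothing but ldd, awk, sort and diff. It is demonstrated on two stock system binaries so that it's self-contained; point it at your httpd and libphp5.so instead:

```shell
# Dump each binary's resolved shared-library dependencies in sorted
# "name path" form, then diff the two lists so any mismatch stands out.
lib_list() { ldd "$1" | awk '$3 ~ /^\// {print $1, $3}' | sort; }

lib_list /bin/ls >ls.libs
lib_list /bin/sh >sh.libs

# Lines prefixed with < or > are dependencies that the two binaries
# resolve differently (or that only one of them has).
diff ls.libs sh.libs || true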
Although it is not strictly a part of the Web server, you might wish to install the latest browser anyway so that you can see the Web pages being served.
Basically, you go get the appropriate binary from www.mozilla.com. Download it to your favorite install directory (e.g. /rpm/Firefox). Untar it:
tar -xvzf firefox-x.0.0.y.tar.gz
This will create a directory called "firefox". Copy this directory somewhere where you'd like it to run from:
mkdir /usr/local/firefox
cp -R firefox/* /usr/local/firefox
chown root:root -R /usr/local/firefox
Add a symlink from the standard bin directory to the Firefox startup script:
ln -s /usr/local/firefox/firefox /usr/bin/firefox
If you want to add Firefox to the GNOME desktop or bottom bar, right click on the appropriate spot and select the "Add a new launcher to this item" choice. Fill in the dialog box. The name should be "Firefox" and the comment should be something like "Web browser". The command is simply "firefox". To add an icon, browse to /usr/local/firefox/icons and pick the 50 x 50 icon (e.g. mozicon50.xpm). Note, if you don't pick an icon, you better remember where GNOME puts it because it will be invisible, otherwise. Save this new launcher and you should be in business.
Webalizer may now come pre-installed on many versions of Linux (e.g. CentOS). Unfortunately, it also appears to come pre-broken on these same versions. So, if you install it and it doesn't work, you may want to remove the RPM and try again, as described below.
If you wish, you can download the latest Webalizer tar file from http://www.mrunix.net/webalizer/download.html. Any currently installed version of Webalizer should work with the latest Apache. Furthermore, development on Webalizer is fairly static but, since having the latest version of Webalizer is cool, untar it in the top level source directory (e.g. /rpm/webalizer):
tar -xvzf webalizer-a.b-yy.tar.gz
It will create a new directory for that version of Webalizer. Switch to that directory and build Webalizer:
cd webalizer-a.b-yy
./configure --enable-dns --with-dblib=ldb-4.0
make
Note that the Webalizer configure script finds db_185.h in the correct place but then screws up when it comes to finding the actual link library containing the db 185 modules. It configures the makefile to link with ldb1 instead of ldb-4.0. If you are running on a system where configure figures this out properly, the "--with-dblib" parameter may be omitted.
A Webalizer userid should be set up to run Webalizer under. No home directory is required and the userid is usually set up as a system userid. All of the typical Linux installations come with one already installed but, if yours doesn't, do the following:
/usr/sbin/useradd -c "Web statistics package" -d /var/www/html/usage \
    -r -s /sbin/nologin -u 67 webalizer
Installing Webalizer is simple. The executable itself is copied to a system directory, a symlink is added to give it an alternate name and then a couple of png files are copied to the directory where the Web statistics will be written. Switch to super-duper user and install Webalizer:
su
cp webalizer /usr/local/bin/webalizer
chown webalizer:root /usr/local/bin/webalizer
chmod u=rwx,go=rx /usr/local/bin/webalizer
ln -s /usr/local/bin/webalizer /usr/local/bin/webazolver   (if not already there)
chown webalizer:root /usr/local/bin/webazolver
You can try the "make install" command (which also copies the documentation), if you wish, but doing it by hand gets the permissions correct. At the very least, you probably want:
su
make install
chown webalizer:root /usr/local/bin/webalizer
chmod u=rwx,go=rx /usr/local/bin/webalizer
chown webalizer:root /usr/local/bin/webazolver
If the usage directory is not already part of the html tree, create it and copy the logos into it:
mkdir /var/www/html/usage
cp webalizer.png msfree.png /var/www/html/usage
chown webalizer:root -R /var/www/html/usage
chmod u=rwx,go=rx -R /var/www/html/usage
Also, the directory where the history, incremental checkpoint and DNS cache are to be kept should be created, if not already done:
mkdir /var/lib/webalizer
chown webalizer:root /var/lib/webalizer
chmod u=rwx,go=rx /var/lib/webalizer
The sample Webalizer config file should be put in /etc/webalizer.conf. Here are some of the high spots that you might want to configure:
LogFile /var/log/httpd/access_log
OutputDir /var/www/html/usage
HistoryName /var/lib/webalizer/webalizer.hist
Incremental yes
IncrementalName /var/lib/webalizer/webalizer.current
PageType htm
PageType cgi
PageType shtm
PageType pdf
PageType php
PageType pl
DNSCache /var/lib/webalizer/dns_cache.db
DNSChildren 10
Quiet yes
Finally, you will need to add a file into the cron.daily directory to cause the webalizer to be run every day.
/etc/cron.daily/webalizer:
#! /bin/bash
# Update access statistics for the local Web site(s).
if [ -s /var/log/httpd/access_log ] ; then
    /usr/local/bin/webalizer
fi
exit 0
Don't forget to set its permissions as follows:
chown root:root /etc/cron.daily/webalizer
chmod u=rwx,go=rx /etc/cron.daily/webalizer
Development on htDig is no longer ongoing, apparently. However, the 3.1.6 version of htDig, which is still available from http://www.htdig.org, is a classic, with good performance and few bugs, that just keeps running and running. So, if you want to install this version, download its tar ball and unzip it:
tar -xvzf htdig-3.1.6.tar.gz
Change to the build directory and configure the build process:
cd htdig-3.1.6
./configure --prefix=/usr/local/htdig --with-cgi-bin-dir=/var/www/cgi-bin \
    --with-image-dir=/var/www/html/htdig \
    --with-search-dir=/usr/local/htdig/template
If you get some b.s. message about how C++ is required and you should consider installing the libstdc++ library, ignore it and run the following configure command instead:
CPPFLAGS="-Wno-deprecated" ./configure --prefix=/usr/local/htdig \
    --with-cgi-bin-dir=/var/www/cgi-bin \
    --with-image-dir=/var/www/html/htdig \
    --with-search-dir=/usr/local/htdig/template
The problem is that the compiler winkies have decided that fstream.h and its ilk should be replaced by something "better" so they whack out a warning when you use it. The htDig guys used this header file as a proxy for C++ and, when the autoconf macro that tests for the presence of fstream.h sees the warning, it thinks the header file wasn't found. It's all b.s. We don't give a damn about what the compiler winkies think and the autoconf macro is broken. So, don't worry about a thing.
Once you get ./configure to fly, make htDig with:
CXXFLAGS="-Wno-deprecated" make -e
Incidentally, in the same vein as fstream being broken, above, it would appear that changes to ifstream in later versions of the C++ library can cause htnotify to loop forever (and chew up a whole bunch of CPU cycles). In a nutshell, whoever wrote htnotify included code that can call the ifstream functions with an empty string for the filename. In earlier versions of the C++ library, this simply resulted in bad() returning a non-zero value, which caused htnotify to go on about its business.
However, with the new C++ library, bad() no longer returns a non-zero value and htnotify tries to read from the file with no name. This doesn't seem to return anything from eof(), either, so it loops forever. Nice work, guys (admittedly, the htnotify code is bogus but not returning EOF on an empty file is bogus too).
So, if you plan to use htnotify, you should fix the code therein by applying this patch:
--- htnotify.cc.orig	2002-01-31 18:47:00.000000000 -0500
+++ htnotify.cc	2009-02-02 20:10:56.000000000 -0500
@@ -185,7 +185,7 @@
     // define default preamble text - blank
     string preambleText = "";
 
-    if (prefixfile != NULL)
+    if ((prefixfile != NULL) && (*prefixfile != '\0'))
     {
 	ifstream in(prefixfile);
 	char buffer[1024];
@@ -212,7 +212,7 @@
     postambleText << " http://www.htdig.org/meta.html\n\n";
     postambleText << "Cheers!\n\nht://Dig Notification Service\n";
 
-    if (suffixfile != NULL)
+    if ((suffixfile != NULL) && (*suffixfile != '\0'))
     {
 	ifstream in(suffixfile);
 	char buffer[1024];
Once the patch is applied, rebuild htnotify with:
CXXFLAGS="-Wno-deprecated" make -e
Then, install htDig as super-duper user:
su make install
A single copy of htDig can be installed and shared by several Web sites on a single Web server. If you'd like all of the Web site config files to be retained in the htDig "conf" directory, it requires a kludge but it works. If you want all of the Web sites to use a single config file in the shared htDig "conf" directory, the same kludge will work in that case too. Or, you can use a separate copy of the config file in each of the Web sites' "db" directories. It's your choice.
Set up a "db" directory under the Web site's top level directory. Copy the rundig script from the htDig "bin" directory to the Web site's "db" directory. Hack it to point to the site's database directory and the htDig common directories. Here are the values to hack:
DBDIR=/var/www/BSMDev/db
COMMONDIR=/usr/local/htdig/common
BINDIR=/usr/local/htdig/bin
If you only want a single, shared config file, skip this step and proceed to hacking the config file. If you want separate config files for each of the Web sites, but all in the common htDig "conf" directory, copy htdig.conf to a file named for each of the Web sites:
cp /usr/local/htdig/conf/htdig.conf /usr/local/htdig/conf/BSMDev.conf
If you want a separate copy of the config file for each Web site, in the "db" directory specific to that Web site, copy htdig.conf there:
cp /usr/local/htdig/conf/htdig.conf /var/www/BSMDev/db
Note that you should then make a symlink from the htDig "conf" directory to the actual location of the config file, just as a pointer to remind yourself that hacking config files in the "conf" directory actually is hacking a file in the Web site's "db" directory:
cd /usr/local/htdig/conf
ln -s /var/www/BSMDev/db/htdig.conf BSMDev.conf
Hack the config file, wherever it is, to configure the site or sites in question. Typical parameters to hack are:
database_dir: /var/www/BSMDev/db
start_url: http://www.bsmdevelopment.com/
local_urls: http://www.bsmdevelopment.com/=/var/www/BSMDev/html/
local_urls_only: true
local_default_doc: index.html welcome.html
limit_urls_to: ${start_url}
maintainer: ewilde@bsmdevelopment.com
You can change any of the other options, if you want.
If you are using a config file in the htDig "conf" directory (either a single file or multiple files), now comes the clever bit (sure, whatever). Make a hard link from the Web site's "db" directory to the config file in the htDig common config directory:
ln /usr/local/htdig/conf/BSMDev.conf /var/www/BSMDev/db/htdig.conf
or
ln /usr/local/htdig/conf/htdig.conf /var/www/BSMDev/db/htdig.conf
You must do this, because of the way that htDig processes the config files. If a hard link isn't used, htDig will not work in this shared mode.
If your config file is actually in the Web site's "db" directory and there is a soft link from the htDig "conf" directory, you need do nothing at this point because htDig will use the config file in the "db" directory.
Create specific versions of SearchWrapper.html, SearchSyntax.html and SearchNoMatch.html for the site in question. These files are invoked by htDig as part of the search process. They should be put in the Web site's "db" directory.
Copy the htsearch program from the htDig build directory to the top level (shared) cgi-bin directory (if it isn't already there):
cp htsearch/htsearch /var/www/cgi-bin
In the Web site's cgi-bin directory, make a symbolic link to htsearch in the shared cgi-bin directory:
ln -s ../../cgi-bin/htsearch /var/www/BSMDev/cgi-bin/htsearch
In the Web site's html directory, make a symbolic link to the installation directory where the htDig icons were installed (the image directory):
ln -s /var/www/html/htdig /var/www/BSMDev/html/htdig
Now we come to the problem of htfuzzy, the program that creates indexes for different "fuzzy" search algorithms. These indexes can then be used by the htsearch program to do fuzzy matches against search terms that are entered by the user.
If you wish the "fuzzy" match algorithms to work, you must build the databases that drive them, using htfuzzy. If you are going to use the "endings" algorithm, you must get the affix rules and language dictionary for the language of your choice (htDig comes with affix rules and a simple dictionary for English bundled) from the ispell Web page (http://fmg-www.cs.ucla.edu/fmg-members/geoff/ispell.html) and then run htfuzzy, which will build the databases in the common directory.
And, therein lies the problem. If you wish to use multiple languages for your different Web sites, or even better, multiple languages within a single Web site, you will be faced with coming up with some scheme for switching the common directory around on the fly. Good luck.
Fortunately, if you are just interested in English, you can do something like this (in this case for the "endings" and "synonyms" algorithms):
cd /usr/local/htdig/common
/usr/local/htdig/bin/htfuzzy -c /usr/local/htdig/conf/htdig.conf \
    endings synonyms
If you'll be adding or removing pages on your Web site(s) on a regular basis, you may want to install the following script (reindex) somewhere in your common scripts directory (e.g. /var/www/Scripts/reindex).
#!/bin/sh
#
# Shell script (run by cron) to check whether any Web pages in this directory
# have changed and reindex them for searching, if so.
#
# This script takes one argument, the name of the Web directory under
# /var/www that is to be indexed (e.g. MyDir).
#

#
# Check to see whether any pages in the Web directory tree that we are given
# are newer than the last indexed date.
#
ArePagesNew()
{
    #
    # See if the timestamp file exists.  If not, we are all done.  If so, we
    # must look at the Web directory tree.
    #
    if [ -f /var/www/$1/db/previous_index ]
    then
        #
        # Run a find command that traverses the Web directory tree, looking
        # for any HTML files that are newer.
        #
        /usr/bin/find /var/www/$1/html -name \*\.html -type f \
            -newer /var/www/$1/db/previous_index \
            -exec touch -f /var/www/$1/db/current_index \;

        #
        # Compare the current stamp file with the previous stamp file.  If
        # their times are different, there's been a change.
        #
        if [ /var/www/$1/db/current_index -nt /var/www/$1/db/previous_index ]
        then
            return 0                            # New
        else
            return 1                            # The same
        fi
    else
        touch -f /var/www/$1/db/current_index
        return 0                                # New
    fi
}

#
# If there is a reference directory, build an index page that points to all
# of the reference material.  This material is loaded dynamically and is
# never directly linked to.  Thus, the crawler will never find it unless
# there is a pointer to it.  The page we create is secretly linked to by the
# top level index of this directory so that the crawler can find and index
# all of the reference pages.
#
LinkRefPages()
{
    #
    # See if the reference directory exists.  If not, we are all done.  If
    # so, we must look at the directory tree for reference documents.
    # Currently they are all documents that start with:
    #
    #     DocIdx_
    #     Inst_
    #     PR_
    #     Samp_
    #     Tech_
    #
    if ! test -d /var/www/$1/html/Reference; then return 0; fi

    #
    # Run a find command that traverses the reference directory, looking for
    # any HTML files that match the pattern.
    #
    /usr/bin/find /var/www/$1/html/Reference -name DocIdx_\*\.html -type f \
        -exec echo \<br\>\<a href=\{\}\>@\{\}@\</a\> \
        >/var/www/$1/html/Reference/refindex_1.html \;
    /usr/bin/find /var/www/$1/html/Reference -name Inst_\*\.html -type f \
        -exec echo \<br\>\<a href=\{\}\>@\{\}@\</a\> \
        >>/var/www/$1/html/Reference/refindex_1.html \;
    /usr/bin/find /var/www/$1/html/Reference -name PR_\*\.html -type f \
        -exec echo \<br\>\<a href=\{\}\>@\{\}@\</a\> \
        >>/var/www/$1/html/Reference/refindex_1.html \;
    /usr/bin/find /var/www/$1/html/Reference -name Samp_\*\.html -type f \
        -exec echo \<br\>\<a href=\{\}\>@\{\}@\</a\> \
        >>/var/www/$1/html/Reference/refindex_1.html \;
    /usr/bin/find /var/www/$1/html/Reference -name Tech_\*\.html -type f \
        -exec echo \<br\>\<a href=\{\}\>@\{\}@\</a\> \
        >>/var/www/$1/html/Reference/refindex_1.html \;

    #
    # If we didn't find any files, we're all done.
    #
    if ! test -s /var/www/$1/html/Reference/refindex_1.html; then return 0; fi

    #
    # Adjust the index entries to be human/robot readable.
    #
    sed "s/@\/var\/www\/$1\/html\/Reference\///" \
        /var/www/$1/html/Reference/refindex_1.html | sed "s/.html@//" \
        | sed "s/\/var\/www\/$1\/html\/Reference/./" \
        >/var/www/$1/html/Reference/refindex_2.html

    #
    # Start the file out with the requisite HTML.
    #
    echo \<!DOCTYPE HTML PUBLIC \"-//W3C//DTD HTML 4.01 Transitional//EN\"\> \
        >/var/www/$1/html/Reference/refindex.html
    echo \<html\>\<head\> >>/var/www/$1/html/Reference/refindex.html
    echo \<meta name=\"robots\" content=\"All\"\> \
        >>/var/www/$1/html/Reference/refindex.html
    echo \</head\>\<body\> >>/var/www/$1/html/Reference/refindex.html

    #
    # Include the index we generated.
    #
    cat /var/www/$1/html/Reference/refindex_2.html \
        >>/var/www/$1/html/Reference/refindex.html

    #
    # Finish off the HTML page.
    #
    echo \</body\>\</html\> >>/var/www/$1/html/Reference/refindex.html

    #
    # Clean up and make the index visible.
    #
    rm -f /var/www/$1/html/Reference/refindex_1.html \
        /var/www/$1/html/Reference/refindex_2.html
    chgrp webmin /var/www/$1/html/Reference/refindex.html
    return 0
}

#
# Check if any pages are newer than the last index time and reindex, if so.
#
if (ArePagesNew $1); then
    LinkRefPages $1
    /var/www/$1/db/rundig
    touch -f /var/www/$1/db/previous_index
fi
Add an entry to crontab to run the reindex script on a nightly basis:
# Reindex any Web pages that have changed since yesterday.  Then, send
# email notification of any expired pages that are found.
15 2 * * * root /var/www/Scripts/reindex BSMDev
15 3 * * * root /usr/local/htdig/bin/htnotify -c /var/www/BSMDev/db/htdig.conf
The reindex script indexes all of the pages on the Web site, whenever a change is made, and builds the htDig indexes. It also builds a page, for use by Web search engines, that links to all of the dynamically loaded pages found on the site (i.e. those pages that appear on the site but are never directly referenced by other pages on the site). This page can be pushed to the Web site's home directory (or elsewhere) so that a Web crawler from a search engine will find all of the dynamically loaded pages. The page simply lists links to all of those pages and is never meant to actually be seen by users; a secret, invisible link to it should be placed in one of the Web site's top level pages.
To add the secret, invisible link, put some HTML that resembles this on one of your site's pages (usually the reference directory's index):
<!-- Secret, hidden link to the generated reference index -->
<span style="visibility: hidden;">
<a href="refindex.html">invisible</a>
</span>
If you wish to index PDF files, as well as HTML files, you need to install Xpdf and doc2html.
Download the latest Xpdf tar file from http://www.foolabs.com/xpdf/ and untar it in the top level source directory:
tar -xvzf xpdf-3.02.tar.gz
It will create a new directory for that version of Xpdf. Switch to that directory and build it:
cd xpdf-3.02
./configure
make
Then, as super duper user, install it:
su
make install
Download the latest doc2html file from http://www.htdig.org. Unfortunately, there is no tar file, just a zippity-do-dah file. So, you'll have to un-zip it using some kind of magic (or a kludge).
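In practice, the magic is just the unzip utility. A minimal sketch, assuming the archive landed in the htDig source directory (the archive and destination names below are assumptions -- use whatever you actually downloaded):

```shell
# Hypothetical archive and destination names; adjust to what you downloaded.
ZIP=/rpm/htDig/doc2html_3_1.zip
DEST=/rpm/htDig/doc2html_31

if [ -f "$ZIP" ]; then
    mkdir -p "$DEST"
    unzip -o "$ZIP" -d "$DEST"     # -o: overwrite existing files without asking
else
    echo "Archive $ZIP not found; download it from http://www.htdig.org first."
fi
```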
The files in the zip archive are just Perl script files so there's no compiling involved. To implement them, copy them to the proper locations:
su cp /rpm/htDig/doc2html_31/doc2html.pl /usr/local/bin chmod go+x /usr/local/bin/doc2html.pl cp /rpm/htDig/doc2html_31/pdf2html.pl /usr/local/bin chmod go+x /usr/local/bin/pdf2html.pl
Edit these two files per the instructions in DETAILS:
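For htDig to actually hand PDF files to these scripts, the htdig.conf for the site also needs an external_parsers attribute mapping the PDF MIME type to the parser. A sketch, assuming the scripts were installed in /usr/local/bin as above (verify the exact attribute value against the doc2html instructions):

```
external_parsers: application/pdf->text/html /usr/local/bin/doc2html.pl
```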
Wordpress can be installed such that a common installation is used by several Web sites. To do this, first get a copy of the install file from http://wordpress.org/download/. Untar the install file and rename the install directory to match the version number:
tar -xvzf wordpress-a.b.yy.tar.gz
mv wordpress wordpress-a.b.yy
Create a shared top level directory in the Web tree and copy all of the Wordpress files there:
mkdir /var/www/Blog
cp -r wordpress-a.b.yy/* /var/www/Blog
mv /var/www/Blog/wp-config-sample.php /var/www/Blog/wp-config.php
Hack the config file to include the database name, user name and password. Add conditional code to select the database table prefix, depending on which Web site is invoking the blog. The table prefix should be a unique prefix for each of the Web sites so that all of the blogs can coexist in one database:
define('DB_NAME', 'Wordpress');
define('DB_USER', 'Wordpress');
define('DB_PASSWORD', 'secretpassword');

if (preg_match('/KittyBlog/', $_SERVER['DOCUMENT_ROOT']))
    $table_prefix = 'kb_';
else
    die('Unknown blog name!  Check for your document root ('
        . $_SERVER['DOCUMENT_ROOT'] . ') in wp-config.php');
Create a symbolic link from a blog directory under the Web site's top level HTML directory to the Wordpress shared top level directory:
ln -s /var/www/Blog /var/www/KittyBlog/html/blog
If you haven't already done so, add a virtual host to the Apache config file for the Web site. Point the PHP include path at the blog directory, if you are going to be invoking the blog via a Web page at another level (see index.php below):
/etc/httpd/conf/httpd.conf:
.
##
## Kitty Blog Virtual Host Context
##
Listen 8680
<VirtualHost _default_:8680>
#
# Document root directory for KittyBlog html.
#
DocumentRoot "/var/www/KittyBlog/html"
<Directory "/var/www/KittyBlog/html">
    Options +Includes
</Directory>
#
# Blog directory used to point to Wordpress.
#
Alias /blog "/var/www/KittyBlog/blog"
#
# Define the properties of the directory above.
#
<Directory "/var/www/KittyBlog/blog">
    AllowOverride None
    Options FollowSymLinks
    Order allow,deny
    Allow from all
</Directory>
#
# Directories defined in the main server that we don't want people to see
# under this port.
#
Alias /manual "/var/www/KittyBlog/limbo"
Alias /doc "/var/www/KittyBlog/limbo"
#
# ScriptAlias: This controls which directories contain server scripts.
# ScriptAliases are essentially the same as Aliases, except that
# documents in the realname directory are treated as applications and
# run by the server when requested rather than as documents sent to the
# client.  The same rules about trailing "/" apply to ScriptAlias
# directives as to Alias.
#
ScriptAlias /cgi-bin/ "/var/www/KittyBlog/cgi-bin/"
#
# Define the properties of the directory above.
#
<Directory "/var/www/KittyBlog/cgi-bin">
    AllowOverride None
    Options ExecCGI FollowSymLinks
    Order allow,deny
    Allow from all
</Directory>
#
# Point the PHP include path at the blog directory.  This lets us include
# stuff without worrying about where we are running from.
#
php_value include_path ".:/var/www/KittyBlog/blog:/usr/local/lib/php"
</VirtualHost>
.
.
.
Create the MySQL database that will be used by all of the Wordpress blogs (change the secretpassword to something really super secret):
$ mysql -u adminusername -p
mysql> create database Wordpress;
mysql> grant all privileges on Wordpress.* to Wordpress@localhost
    -> identified by "secretpassword";
mysql> flush privileges;
mysql> quit
Invoke the blog from the blog directory (this is very important). For example:
http://192.168.1.1:8680/blog/
Follow the link to the install page and set up the blog.
If you'd like to invoke the blog from somewhere else (e.g. the top level of the html directory), copy the index.php file from the Web site's blog directory to the top level directory (or elsewhere):
cp /var/www/KittyBlog/blog/index.php /var/www/KittyBlog/html
Because PHP is brain dead, you will have to hack this file to load from the library list:
require('wp-blog-header.php');
This uses the include_path set up in the httpd.conf file to find the relative include, wp-blog-header.php, in the blog directory via the library search list. Users can run the blog through the blog directory or through the copied file. You can invoke the blog from any other PHP file in the same manner, or you can simply link to it in the blog directory.
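The hack itself can be done with a sed one-liner. A sketch, exercised here on a scratch copy in /tmp rather than the live file; the original require line below is an assumption (it varies by WordPress version), so check your own index.php before running this against it:

```shell
# Build a scratch copy of index.php carrying the stock absolute-path require
# (this original line is an assumption; newer/older WordPress versions differ).
printf "%s\n" "<?php" \
    "require( dirname( __FILE__ ) . '/wp-blog-header.php' );" \
    > /tmp/index.php

# Rewrite it to a bare relative require so PHP searches include_path instead.
sed -i "s|require( dirname( __FILE__ ) . '/wp-blog-header.php' );|require('wp-blog-header.php');|" \
    /tmp/index.php

# Show the result.
grep "wp-blog-header" /tmp/index.php
```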
Finish off your blog by choosing a theme. Log in to the blog as the admin user and, from the Dashboard page, choose the Appearance menu item. You'll be shown a list of themes that you can choose from. Pick the one that you like and you're in business.
If you'd like a custom theme (i.e. you'd like to hack the page heading, author's picture, links, etc., etc.) you can easily make one up. Begin by selecting one of the many, many Wordpress templates that are available on the Internet or one of the standard templates that come pre-installed. Create a new theme directory under your common blog directory, copy the theme to it, and give it the proper permissions:
mkdir /var/www/Blog/wp-content/themes/KittyBlog
cp -R /my/themes/crazy-kitty/* /var/www/Blog/wp-content/themes/KittyBlog
chown wordpress:joe -R /var/www/Blog/wp-content/themes/KittyBlog
chmod g+w -R /var/www/Blog/wp-content/themes/KittyBlog
Hack the style.css file in the custom theme directory to identify the new theme (Wordpress uses the comments at the top of this file to describe the theme when it is displayed to the administrator, to assist them in choosing the theme they want). Your hacks should look something like this:
style.css:
/*
Theme Name: Kitty Blog
Theme URI: http://www.kittyblog.com/
Description: A theme based on Kubrick v1.2.5 for WordPress 2.x that \
             includes a gray head on a light gray background in the \
             space above the page.  Title is fixed at "Kitty Blog". \
             There is room for an author description.
Version: 2.0 rev. g
Author: Kitty Kat
Author URI: http://www.kittyblog.com/
*/

/*
This is the "Kitty Blog" Template

For WordPress 2.x

designed by Kitty
http://www.kittyblog.com

This template is based entirely upon the amazing work of Michael Heilemann,
which includes his source completely.

said source: Kubrick v1.2.5 for WordPress 1.2
http://binarybonsai.com/kubrick/

For "Kitty Blog" layout and design support, please contact
http://www.kittyblog.com
The CSS, XHTML and design is released under GPL: http://www.opensource.org/licenses/gpl-license.php
*** regarding images ***
All CSS that involves the use of images can be found in the 'index.php'
file.  This is to ease installation inside subdirectories of a server.

*** MORE ABOUT IMAGES ***
This file contains images not provided in the original distribution of
Kubrick 1.2.5.  All images in the provided "/images" folder are also
presented in .psd format for your modification pleasure.
Have fun, and don't be afraid to contact either one of us if you have
questions, comments or praise.
*/
.
.
.
Of course, you can hack the colors and other attributes that are set within the style sheet to make your blog appear as you'd like it.
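For example, a color tweak in style.css might look like this (the selector and color are hypothetical -- grab the real selectors from the Kubrick-derived style.css you copied into the custom theme directory):

```css
/* Hypothetical hack: darken the band behind the page header. */
#header {
    background-color: #666666;
}
```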
If you'd like to show a picture of yourself on the front page of the blog, you can replace author-photo.gif in the images sub-directory under the custom theme directory. Of course, this may not be a good idea if the FBI is looking for you but, then, who are we to tell you how to run your blog?
To put the finishing touches on your blog, you'll probably want to edit the header.php file that is found in the custom theme directory. Here you can change the site's navigation menu to link to custom pages or a description of the site. For example, you might add:
header.php:
<ul id="navigation">
    <li><strong>Navigation Menu:</strong></li>
    <li><a href="<?php bloginfo('url'); ?>/?page_id=10"
        title="An archive of classic kitty tales from days gone by">
        Classic Tales</a></li>
<!--
    <li><a href="http://www.yoursecondlink"
        title="this is a special title popup for extra clarity">
        your link 2</a></li>
    <li><a href="http://www.yourthirdlink"
        title="this is a special title popup for extra clarity">
        your link 3</a></li>
-->
    <li><a href="http://www.kittyblog.com/blog/?page_id=2"
        title="About Kitty Blog">
        About</a></li>
</ul>
Coppermine can be installed such that a common installation is used by several Web sites. To do this, first get a copy of the install file from http://coppermine-gallery.net/index.php. Unzip the install file, using Winduhs unzip, and rename the install directory to match the version number:
mv cpgabyy cpg-a.b.yy
Create a shared top level directory in the Web tree and copy all of the Coppermine files there:
mkdir /var/www/Coppermine cp -r cpg-a.b.yy/* /var/www/Coppermine
Create a Coppermine directory under the Web site's top level directory and create symbolic links to the shared Coppermine directory:
mkdir /var/www/mysite/coppermine
find /var/www/Coppermine -maxdepth 1 \
    -exec ln -s \{\} /var/www/mysite/coppermine \;
rm -f /var/www/mysite/coppermine/albums
rm -f /var/www/mysite/coppermine/include
mkdir /var/www/mysite/coppermine/include
find /var/www/Coppermine/include -maxdepth 1 \
    -exec ln -s \{\} /var/www/mysite/coppermine/include \;
Copy the non-shared files, that Coppermine hacks, to the Web site's Coppermine directory:
mkdir /var/www/mysite/coppermine/albums
cp -r /var/www/Coppermine/albums/* /var/www/mysite/coppermine/albums
Change the permissions (you will need to do this as root) on some of the Coppermine directories so that it can store photos in them:
su
chgrp apache /var/www/mysite/coppermine/include
chgrp apache /var/www/mysite/coppermine/albums
chgrp apache /var/www/mysite/coppermine/albums/edit
chgrp apache /var/www/mysite/coppermine/albums/userpics
chmod g+w /var/www/mysite/coppermine/include
chmod g+w /var/www/mysite/coppermine/albums
chmod g+w /var/www/mysite/coppermine/albums/edit
chmod g+w /var/www/mysite/coppermine/albums/userpics
Create the MySQL database that will be used by all of the Coppermine photo galleries (change the secretpassword to something really super secret):
$ mysql -u adminusername -p
mysql> create database Coppermine;
mysql> grant all privileges on Coppermine.* to Coppermine@localhost
    -> identified by "secretpassword";
mysql> flush privileges;
mysql> quit
Fire up the Coppermine install from the Web site:
http://your.web.site/coppermine/install.php
We prefer the original Coppermine installation to the wizard. So, pick that on the first page.
Set the administrator's username, password and email address in these fields:
Username:      (we like admin)
Password:
Email address:
Set the database name, coppermine username, and coppermine user password in these fields:
MySQL Database Name:
MySQL Username:
MySQL Password:
Then, set the table prefix to something that is a mnemonic of the Web site to which it applies. For example: "mysite_".
If you've installed ImageMagick, set the path to the "convert" program in the field:
ImageMagick path:
Typically, if you built ImageMagick with the defaults, it will be found in:
/usr/local/bin/
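If you're not sure where your build actually put it, a quick sketch to find out (assumes convert ends up on PATH when installed):

```shell
# Report the directory holding ImageMagick's convert, if one is on PATH.
CONVERT_PATH=$(command -v convert || true)
if [ -n "$CONVERT_PATH" ]; then
    MSG="ImageMagick path: $(dirname "$CONVERT_PATH")/"
else
    MSG="convert not on PATH; look in /usr/local/bin or install ImageMagick"
fi
echo "$MSG"
```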
Once the install has run, log in as the administrator and change some settings. Here are the ones we set:
General Settings/
    Gallery name: My Site Photo Gallery
    Gallery description: My Site online photo album
    Timezone difference relative to GMT: -5
Themes settings/
    Theme: curve
Files and thumbnails settings/
    Max size for uploaded files (KB): 16384
    Auto resize images that are larger than max width or height: Yes:Everyone
Custom fields for image description/ (if you want custom fields)
    Field 1 name:
    Field 2 name:
    Field 3 name:
    Field 4 name: