Installing and deploying a Node application

2018-05-24  yahveyeye

**Official installation packages**
Install an LTS Node
Install the current LTS release of Node (node 4 at the time of writing).
The lovely people at NodeSource make official packages of node for most distros. We've included the more popular instructions below.
RHEL / CentOS

$ curl -sL https://rpm.nodesource.com/setup | sudo bash -
$ sudo yum install -y nodejs
$ sudo yum install -y gcc-c++ make
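
Once the packages are installed, you can confirm the toolchain is available (the exact versions will depend on the release you installed):

node -v
npm -v
gcc --version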

Using the Taobao npm mirror

  1. Via the config command

npm config set registry https://registry.npm.taobao.org
npm info underscore (if the registry above is set correctly, this command will print a string response)

  2. Specify the registry on the command line

npm --registry https://registry.npm.taobao.org info underscore

  3. Install the cnpm client

npm i -g cnpm
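
cnpm mirrors the npm command-line interface, so once installed it can be used in place of npm for installs, for example:

cnpm install express --save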

Use npm shrinkwrap to lock down project dependencies

npm shrinkwrap
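
A typical workflow (a sketch, assuming the dependencies are already listed in package.json) is to install, shrinkwrap, and commit the resulting lockfile so every deployment gets the same dependency tree:

npm install        # install the versions allowed by package.json
npm shrinkwrap     # write the exact installed versions to npm-shrinkwrap.json
git add npm-shrinkwrap.json
git commit -m "Lock dependency versions"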

Using Node's built-in HTTPS stack

Reference (about HTTPS): https://github.com/certsimple/ssltest

Generate a self-signed certificate

openssl genrsa -out key.pem 2048
openssl req -new -key key.pem -out csr.pem
openssl x509 -req -days 9999 -in csr.pem -signkey key.pem -out cert.pem
rm csr.pem
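
With key.pem and cert.pem in place, a minimal sketch of an HTTPS server built on Node's https module could look like the following (the file name https-server.js and the default port are assumptions; binding to 443 as a non-root user also requires the capability step described below):

// https-server.js - serve over TLS using the self-signed certificate generated above
var https = require('https')
var fs = require('fs')

var options = {
  key: fs.readFileSync('key.pem'),   // private key from openssl genrsa
  cert: fs.readFileSync('cert.pem')  // certificate from openssl x509
}

var port = process.env.PORT || 443

https.createServer(options, function (req, res) {
  res.writeHead(200)
  res.end('hello over https\n')
}).listen(port, function () {
  console.log('HTTPS server listening on port ' + port)
})

In an Express app you would pass the Express app object to https.createServer instead of the inline request handler.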

**Give an executable permission to bind to low ports**
Want your web app to be able to listen on port 443 or 80 without running it as root? Linux added the ability a decade ago, which is actually super recent in Unix time:
We use node in this example, but you can (and should) use this for anything else.
# Since this might be a symlink, and capabilities only apply to real files

NODE_EXECUTABLE=$(readlink -f $(which node))

Then add the cap_net_bind_service capability:

# ep is 'effective, permitted' - see http://linux.die.net/man/3/cap_from_text

sudo setcap 'cap_net_bind_service=+ep' $NODE_EXECUTABLE

You can see the capability applied with:

getcap $NODE_EXECUTABLE
/usr/local/node-v4.1.1-linux-x64/bin/node = cap_net_bind_service+ep

Your executable now has permission to bind to low ports as a regular user.
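
A quick illustrative check (not required, and it will fail if something else is already listening on port 80) is to bind to a low port as your normal user and exit immediately:

node -e "require('http').createServer().listen(80, function () { console.log('bound to port 80 as uid ' + process.getuid()); process.exit(0) })"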

Check your work at SSL Labs - you should get at least an A.

Larger applications

Nginx

Install the latest version of Nginx, following https://www.liberiangeek.net/2014/10/install-latest-version-nginx-centos-7/

Nginx isn't part of the default CentOS repositories. In order to get Nginx, you must add an external repository. Create the repo file:

sudo vi /etc/yum.repos.d/nginx.repo

Then copy and paste the lines below into the file and save it.

[nginx]
name=nginx repo
baseurl=http://nginx.org/packages/centos/$releasever/$basearch/
gpgcheck=0
enabled=1

The baseurl above points at the stable Nginx packages; if you want the mainline branch instead, use http://nginx.org/packages/mainline/centos/$releasever/$basearch/ as the baseurl.

Issue the following commands to install Nginx, enable it at boot, and start it:

sudo yum install -y nginx
sudo systemctl enable nginx

sudo systemctl start nginx
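
You can confirm Nginx is up and serving its default welcome page with:

sudo systemctl status nginx
curl -I http://localhost   # should return HTTP/1.1 200 OK with a Server: nginx header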

Allowing HTTP Traffic

By default, HTTP traffic is not allowed through the CentOS 7 firewall. To enable it, open the firewall to let it through by running the commands below.

sudo firewall-cmd --permanent --zone=public --add-service=http
sudo firewall-cmd --permanent --zone=public --add-service=https
sudo firewall-cmd --reload

Configuration
To be continued.

Upload the code

cd /opt/
sudo mkdir app
sudo chown linyuan app
git clone git@github.com:nodejsnewbie/base-express.git app
cd app
npm install

Edit app.js so that the application itself no longer serves static files (Nginx will take over that job below):

var express = require('express'),
    app = express(),
    port = process.env.PORT || 3000

app.set('views', __dirname + '/views')
app.engine('jade', require('jade').__express)
app.set('view engine', 'jade')
app.use(require('./controllers'))

app.listen(port, function() {
  console.log('Listening on port ' + port)
})

After these changes our node application no longer serves its own static files from the public/ folder.
Next, replace views/index.jade with the following:

doctype html
html
  head
    title Your basic web app structure
    link(href="/public/css/style.css", rel="stylesheet")
  body
    h1 Welcome to your basic web app structure
    p
      | If the title above is red
      | then Nginx is serving static files!

The last file to change is public/css/style.css:

h1 { color: red;}

Services on all popular Linux distributions now use systemd. This means we don't have to write shell scripts, explore the wonders of daemonization, change user accounts, clear environment variables, set up automatic restarts, log to weird syslog locations like 'local3', and a bunch of other stuff.
Instead, we just make a .service file for the app and let the OS take care of these things. Here's an example, which we'll save as node-app-1.service:

[Service]
ExecStart=/usr/bin/node /opt/app/app.js
Restart=always
StandardOutput=syslog
StandardError=syslog
SyslogIdentifier=node-app-1
User=your_app_user_name
Group=your_app_user_name
Environment=NODE_ENV=production PORT=5000

[Install]
WantedBy=multi-user.target

This small file tells systemd to restart the service when it dies, to send all output to syslog, and to set PORT to 5000 in the environment, as well as a few other things.

  1. Put this in /etc/systemd/system/node-app-1.service, but don't forget to replace your_app_user_name with the appropriate user name.

  2. Then create one more file like the above in /etc/systemd/system/node-app-2.service, with two minor differences: instead of SyslogIdentifier=node-app-1 use SyslogIdentifier=node-app-2, and change PORT=5000 to PORT=5001 (one way to generate it is sketched below).
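
A quick way to derive the second unit file from the first and make systemd aware of both new files (a convenience sketch, not the only way):

# Copy the first unit file, swapping the syslog identifier and the port
sudo sh -c 'sed -e "s/node-app-1/node-app-2/" -e "s/PORT=5000/PORT=5001/" \
  /etc/systemd/system/node-app-1.service > /etc/systemd/system/node-app-2.service'

# Tell systemd about the newly added unit files
sudo systemctl daemon-reload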

Then run the following to start both instances of our node application

systemctl start node-app-1
systemctl start node-app-2

The first instance will accept requests on port 5000, whereas the other one listens on port 5001. If either of them crashes, it will automatically be restarted.

To make your node app instances run when the server starts, do the following:

systemctl enable node-app-1
systemctl enable node-app-2

In case there are problems with any of the commands above, you can use either of these two:

sudo systemctl status node-app-1
sudo journalctl -u node-app-1

The first command will show your app instance's current status and whether it is running. The second command will show all logging information from your instance, including output on the standard error and standard output streams.
Use the first command now to see whether your app is running or whether there was a problem starting it.

To reach the app from outside, open the firewall for HTTP traffic and for port 5000 (the latter only if you want to hit an instance directly, bypassing Nginx):

firewall-cmd --permanent --zone=public --add-service=http

firewall-cmd --permanent --zone=public --add-port=5000/tcp

firewall-cmd --reload

If there is new application code in the repository, all you have to do is the following:

cd /opt/app
git pull
sudo systemctl restart node-app-1
sudo systemctl restart node-app-2

And the latest version will be ready to serve your users.

  1. Install MongoDB
    Reference:
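
A minimal sketch of installing MongoDB on CentOS 7 from the official mongodb-org yum repository (this assumes the MongoDB repo file has already been added under /etc/yum.repos.d/ as described in MongoDB's install docs):

sudo yum install -y mongodb-org   # server, shell and tools metapackage
sudo systemctl enable mongod      # start MongoDB at boot
sudo systemctl start mongod
sudo systemctl status mongod      # verify that it is running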

Listening on ports 5000 & 5001 is nice, but by default browsers look at port 80. Also, in our current setup no static files are served by our application.

Here is our Nginx configuration:

upstream node_server {
    server 127.0.0.1:5000 fail_timeout=0;
    server 127.0.0.1:5001 fail_timeout=0;
}

server {
    listen 80 default_server;
    listen [::]:80 default_server;
    index index.html index.htm;
    server_name _;

    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_redirect off;
        proxy_buffering off;
        proxy_pass http://node_server;
    }

    location /public/ {
        root /opt/app;
    }
}

This configuration makes all static files from /opt/app/public/ available under the /public/ path. It forwards all other requests to the two instances of our app listening on ports 5000 and 5001. Basically, Nginx acts as both a web server and a load balancer.
To use this configuration save it in /etc/nginx/conf.d/node-app.conf and then in your /etc/nginx/nginx.conf file remove completely the default server section below the include /etc/nginx/conf.d/*.conf line.
All you have to do now is to restart nginx for your latest configuration to take effect.
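
Before restarting, you can check the configuration for syntax errors:

sudo nginx -t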

$ sudo systemctl restart nginx

If the application uses websockets, the following lines have to be added to the location / block of the Nginx configuration:

proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_http_version 1.1;

and Nginx has to be reloaded:

systemctl reload nginx

This is just the tip of the iceberg when it comes to hosting and deploying node applications.

One thing that can be improved is to create a new user specifically for the node app. This makes the application more secure, as it then has only the access it actually needs.
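
A rough sketch of what that could look like (the user name nodeapp is an assumption; you would then set User= and Group= in the service files accordingly):

# Create a system user with no login shell for running the app
sudo useradd --system --shell /sbin/nologin --home-dir /opt/app nodeapp

# Give that user ownership of the application directory
sudo chown -R nodeapp:nodeapp /opt/app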

Something else you could do is take everything above and create an Ansible playbook out of it.

Ansible is a great tool to configure and orchestrate servers. It’s really simple. Using this playbook you will be able to launch & deploy even hundreds of servers.
We've now got a working, basic environment! Want to build an Ansible playbook or Dockerfile from the above? Go for it - and let us know!
