Setting Up a Cybex Network Node

2018-11-07  Cobb_chang

I hadn't originally planned to run a node. But a couple of days ago, when I wanted to fetch market K-line (candlestick) data, the team told me the official nodes only keep the most recent 1,000 K-line records; to get older trading data, you have to run your own node and store it yourself. See my exchange with the team here:

https://github.com/CybexDex/cybex-node-doc/issues/3

So I decided to set up my own node, following the official guide:

https://github.com/CybexDex/how-to-run-cybex-node

To recap, the overall process is:

1. Create a new folder.

2. Download the genesis block file, genesis.json.

3. Download the configuration file.

4. Run the node.
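The four steps above can be sketched roughly as follows. Note that the folder name, the download URLs, and the `witness_node` binary name are assumptions on my part; take the real links and binary from the how-to-run-cybex-node repo.

```python
# Rough sketch of the four setup steps. The workdir name is arbitrary,
# and the commented-out URLs/binary are placeholders -- copy the real
# ones from the official how-to-run-cybex-node guide.
import os

workdir = "cybex-node"
os.makedirs(workdir, exist_ok=True)  # step 1: create a new folder

# steps 2 and 3: fetch genesis.json and config.ini into the folder
# urllib.request.urlretrieve("<genesis.json URL>", f"{workdir}/genesis.json")
# urllib.request.urlretrieve("<config.ini URL>", f"{workdir}/config.ini")

# step 4: run the witness node against this data directory
# subprocess.run(["./witness_node", "--data-dir", workdir,
#                 "--genesis-json", f"{workdir}/genesis.json"])

print("workspace prepared:", workdir)
```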

Of these, the configuration file and the run parameters deserve a closer look.

Configuration file: config.ini

'''

# Endpoint for P2P node to listen on
# (the local IP:port this node's P2P service binds to)
p2p-endpoint = 0.0.0.0:5000

# P2P nodes to connect to on startup (may specify multiple times)
# (the seed node is only used at startup to discover the first peer;
#  the program then finds all reachable peers on the network
#  automatically and syncs blocks from them)
seed-node = 47.100.231.66:5000

# JSON array of P2P nodes to connect to on startup
seed-nodes = []

# Pairs of [BLOCK_NUM,BLOCK_ID] that should be enforced as checkpoints.
# checkpoint =

# Endpoint for websocket RPC to listen on
# (the RPC port used by wallets, i.e. the port you expose to clients)
rpc-endpoint = 23.83.229.135:8090

# Endpoint for TLS websocket RPC to listen on
# (same as above, but apparently this one is wss while the one above
#  is ws -- still to be verified)
# rpc-tls-endpoint =

# The TLS certificate file for this server
# (on Windows this is a pitfall: you need to download a cacert.pem
#  file and run: set SSL_CERT_FILE=d:/cacert.pem)
# server-pem =

# Password for this certificate
# server-pem-password =

# File to read Genesis State from
# genesis-json =

# Block signing key to use for init witnesses, overrides genesis file
# dbg-init-key =

# JSON file specifying API permissions
# api-access =

# Enable block production, even if the chain is stale.
enable-stale-production = false

# Percent of witnesses (0-99) that must be participating in order to produce blocks
required-participation = false

# ID of witness controlled by this node (e.g. "1.6.5", quotes are required, may specify multiple times)
# witness-id =

# Tuple of [PublicKey, WIF private key] (may specify multiple times)

# Account ID to track history for (may specify multiple times)
# track-account =

# Keep only those operations in memory that are related to account history tracking
partial-operations = true

# Maximum number of operations per account will be kept in memory
max-ops-per-account = 50

# Track market history by grouping orders into buckets of equal size measured in
# seconds, specified as a JSON array of numbers
# (the K-line intervals to store: 5 minutes, 1 hour, 1 day)
bucket-size = [300,3600,86400]

# How far back in time to track history for each bucket size, measured in the
# number of buckets (default: 1000)
# (how many records to keep for each K-line interval)
history-per-size = 1000000000

# declare an appender named "stderr" that writes messages to the console
[log.console_appender.stderr]
stream=std_error

# declare an appender named "p2p" that writes messages to p2p.log
[log.file_appender.p2p]
filename=logs/p2p/p2p.log
# filename can be absolute or relative to this config file

# route any messages logged to the default logger to the "stderr" logger we
# declared above, if they are info level or higher
[logger.default]
level=info
appenders=stderr

# route messages sent to the "p2p" logger to the p2p appender declared above
[logger.p2p]
level=info
appenders=p2p

'''
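As a sanity check on those last two settings, a quick back-of-the-envelope calculation shows why the default history-per-size of 1000 is too small for long-term K-line data: at 5-minute buckets, 1,000 records cover only about three and a half days.

```python
# Coverage of each K-line bucket at the default history-per-size of 1000
# (bucket sizes taken from the config above).
bucket_sizes = [300, 3600, 86400]   # 5 min, 1 h, 1 day, in seconds
history_per_size = 1000             # the default, which the config raises

for size in bucket_sizes:
    days = size * history_per_size / 86400
    print(f"{size:>6}s buckets cover about {days:.1f} days")
```

This is why the config bumps history-per-size to a very large value: it keeps enough buckets for the long-range history the official nodes discard.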

Another write-up worth consulting:

https://www.jianshu.com/p/9a58ad875cc3
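Once the node is up, the rpc-endpoint configured above speaks websocket JSON-RPC in the usual Graphene/BitShares call shape, which Cybex inherits. A request might look like the sketch below; the API name, method, and asset symbols here are illustrative assumptions, not taken from the Cybex docs.

```python
import json

# Illustrative Graphene-style JSON-RPC request for the node's websocket
# RPC port (the rpc-endpoint above). "database"/"get_ticker" and the
# CYB/ETH pair are assumptions for the sake of the example.
payload = {
    "id": 1,
    "method": "call",
    "params": ["database", "get_ticker", ["CYB", "ETH"]],
}
message = json.dumps(payload)
print(message)  # send this over the ws connection to rpc-endpoint
```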
