Getting started with Spring Cloud: environment setup

1. What is Spring Cloud

Spring Cloud is Spring's integration framework for developing microservices. It gives developers tools for common distributed-system concerns: configuration management, service discovery, circuit breakers, intelligent routing, micro-proxies, a control bus, one-time tokens, global locks, leader election, distributed sessions, and cluster state. With Spring Cloud, developers can implement these patterns quickly.

2. Why use Spring Cloud

  1. Proven against Netflix's production workload and used at large scale abroad
  2. Low barrier to entry, since Spring is already used everywhere in China
  3. Quick to set up

3. Spring Cloud quick-start setup

1. eureka: the service registry

Download https://github.com/mykite/eureka-server.git
Build and run it directly, or run mvn clean install and start the resulting jar, then open it in a browser.
After deployment:
(screenshot: the Eureka dashboard)
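For reference, a minimal sketch of what such a Eureka server application boils down to, assuming the standard spring-cloud-starter-eureka-server setup (the package and class names here are illustrative):

package com.kite.test.eureka; // hypothetical package

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.netflix.eureka.server.EnableEurekaServer;

// Starts the embedded Eureka registry; services register themselves with it.
@SpringBootApplication
@EnableEurekaServer
public class EurekaServerApplication {
    public static void main(String[] args) {
        SpringApplication.run(EurekaServerApplication.class, args);
    }
}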

2. configServer

Centralized management of configuration, backed by SVN or Git.
https://github.com/mykite/configserver.git
Build and run it directly, or run mvn clean install and start the resulting jar, then open it in a browser.
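Likewise, a minimal sketch of what the configserver application boils down to, assuming the standard spring-cloud-config-server setup (names illustrative):

package com.kite.test.config; // hypothetical package

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.config.server.EnableConfigServer;

// Serves configuration out of the Git repository configured below.
@SpringBootApplication
@EnableConfigServer
public class ConfigServerApplication {
    public static void main(String[] args) {
        SpringApplication.run(ConfigServerApplication.class, args);
    }
}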
How to use it
Configured in the config server:

spring:
  cloud:
    config:
      server:
        git:
          uri: https://github.com/mykite/config-repostory

Commit a file hello-server.yml to the test branch, with the content:

test.name: kite

Visiting http://localhost:8888/hello-server/{profile}/test (the URL pattern is /{application}/{profile}/{label}) serves hello-server.yml (or the .properties variant) from the test branch of the configured GitHub repository.
The corresponding configuration in the client application:

spring:
  cloud:
    config:
      uri: http://localhost:8888
      label: test

The remote values can then be injected.
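A minimal sketch of that injection (assuming, as usual, that the spring.cloud.config settings above live in the client's bootstrap.yml; the controller itself is illustrative):

package com.kite.test.springcloud.client.controller; // hypothetical

import org.springframework.beans.factory.annotation.Value;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class NameController {
    // Resolved through the config server from hello-server.yml on the test branch
    @Value("${test.name}")
    private String name;

    @RequestMapping("/name")
    public String name() {
        return name; // "kite"
    }
}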

3. ribbon

Ribbon implements client-side (soft) load balancing. Three things are central to it:

  1. Service discovery: find the list of instances behind a dependency
  2. Selection rules: how to pick one valid instance out of many
  3. Monitoring: detect failed instances and evict them efficiently

The built-in selection rules include (see the sketch after this list):

  • simple round-robin load balancing
  • weighted response-time load balancing
  • zone-aware round-robin load balancing
  • random load balancing
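A hedged sketch of swapping in a different rule: IRule and RandomRule are real Netflix Ribbon classes, but the configuration class itself is illustrative, and a bean like this customizes the rule globally rather than per client:

package com.kite.test.springcloud.client; // hypothetical

import com.netflix.loadbalancer.IRule;
import com.netflix.loadbalancer.RandomRule;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class RibbonRuleConfig {
    // Other built-in rules: RoundRobinRule, WeightedResponseTimeRule, ZoneAvoidanceRule
    @Bean
    public IRule ribbonRule() {
        return new RandomRule();
    }
}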

4. hystrix

The circuit breaker: when calls to a dependency keep failing, Hystrix trips open and fails fast with a fallback instead of letting blocked threads pile up.
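A minimal sketch of a Hystrix-protected call with a fallback, assuming spring-cloud-starter-hystrix on the classpath, @EnableCircuitBreaker on the application class, and a @LoadBalanced RestTemplate bean (the service class and URL are illustrative):

package com.kite.test.springcloud.client; // hypothetical

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;
import org.springframework.web.client.RestTemplate;
import com.netflix.hystrix.contrib.javanica.annotation.HystrixCommand;

@Service
public class HelloService {
    @Autowired
    private RestTemplate restTemplate;

    // If the remote call fails or times out, Hystrix invokes the fallback instead.
    @HystrixCommand(fallbackMethod = "helloFallback")
    public String hello(String name) {
        return restTemplate.getForObject("http://HelloServer/hello?name={name}", String.class, name);
    }

    public String helloFallback(String name) {
        return "{hello: 'fallback'}";
    }
}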

5. zuul

Similar to nginx: provides reverse-proxy (edge routing) functionality.
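A hedged sketch of a Zuul route, mapping an edge path to the service registered in Eureka (the route name and path are illustrative; the gateway's application class would carry @EnableZuulProxy):

zuul:
  routes:
    hello:
      path: /api/hello/**
      serviceId: HelloServer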

Project setup

Project structure

springcloud-server: the service provider
springcloud-client: calls the service through the Feign client
springcloud-feginclient: the Feign client interfaces used to call the server
springcloud-parent: the Maven parent project

parent

pom.xml

<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <parent>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-parent</artifactId>
        <version>1.3.5.RELEASE</version>
    </parent>
    <groupId>com.kite.test</groupId>
    <artifactId>springcloud-parent</artifactId>
    <version>1.0.0</version>
    <packaging>pom</packaging>
    <name>springcloud-parent</name>
    <url>http://maven.apache.org</url>
    <properties>
        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
    </properties>
    <modules>
        <module>../springcloud-client</module>
        <module>../springcloud-server</module>
        <module>../springcloud-feginclient</module>
    </modules>
    <dependencyManagement>
        <dependencies>
            <dependency>
                <groupId>org.springframework.cloud</groupId>
                <artifactId>spring-cloud-dependencies</artifactId>
                <version>Brixton.SR4</version>
                <type>pom</type>
                <scope>import</scope>
            </dependency>
        </dependencies>
    </dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>org.springframework.cloud</groupId>
            <artifactId>spring-cloud-starter-config</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.cloud</groupId>
            <artifactId>spring-cloud-starter-eureka</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.cloud</groupId>
            <artifactId>spring-cloud-starter-feign</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.cloud</groupId>
            <artifactId>spring-cloud-starter-ribbon</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.cloud</groupId>
            <artifactId>spring-cloud-starter-hystrix</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.cloud</groupId>
            <artifactId>spring-cloud-starter-zuul</artifactId>
        </dependency>
    </dependencies>
</project>
server

pom.xml

<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <packaging>jar</packaging>
    <name>springcloud-server</name>
    <artifactId>springcloud-server</artifactId>
    <url>http://maven.apache.org</url>
    <parent>
        <groupId>com.kite.test</groupId>
        <artifactId>springcloud-parent</artifactId>
        <version>1.0.0</version>
    </parent>
</project>

The service it exposes

package com.kite.test.springcloud.controller;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;
import org.springframework.web.bind.annotation.RestController;
/**
 * HelloController: the service exposed to the outside.
 * @author pengliang 2016-08-08 16:23
 */
@RestController
public class HelloController {

    /**
     * REST endpoint used for testing.
     * @RequestParam binds a query parameter (url?name=...);
     * @RequestBody would instead parse a JSON request body.
     * @param name
     * @return
     */
    @RequestMapping(value = "/hello", method = RequestMethod.GET)
    public String hello(String name) {
        return "{hello: '" + name + "'}";
    }
}

Startup class

package com.kite.test.springcloud;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.client.circuitbreaker.EnableCircuitBreaker;
import org.springframework.cloud.client.discovery.EnableDiscoveryClient;
// The Spring Boot main startup class
@SpringBootApplication
@EnableDiscoveryClient
@EnableCircuitBreaker
public class ServerApplication {
    public static void main(String[] args) {
        SpringApplication.run(ServerApplication.class, args);
    }
}
feginClient

pom.xml

<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <packaging>jar</packaging>
    <name>springcloud-feginclient</name>
    <artifactId>springcloud-feginclient</artifactId>
    <url>http://maven.apache.org</url>
    <parent>
        <groupId>com.kite.test</groupId>
        <artifactId>springcloud-parent</artifactId>
        <version>1.0.0</version>
    </parent>
    <properties>
        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
    </properties>
</project>

The interface exposed by feginClient

package com.kite.test.springcloud.feginclient;
import org.springframework.cloud.netflix.feign.FeignClient;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;
import org.springframework.web.bind.annotation.RequestParam;
/**
 * The Feign client interface.
 * HelloFeginClient: calls the remote service automatically through Feign.
 * @author pengliang 2016-08-08 16:25
 */
@FeignClient(value = "HelloServer") // matches the server's spring.application.name
public interface HelloFeginClient {

    // GET, to match the mapping the server exposes on /hello
    @RequestMapping(value = "/hello", method = RequestMethod.GET)
    public String hello(@RequestParam(name = "name") String name);
}
client

pom.xml

<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <packaging>jar</packaging>
    <name>springcloud-client</name>
    <artifactId>springcloud-client</artifactId>
    <url>http://maven.apache.org</url>
    <parent>
        <groupId>com.kite.test</groupId>
        <artifactId>springcloud-parent</artifactId>
        <version>1.0.0</version>
    </parent>
    <dependencies>
        <dependency>
            <groupId>com.kite.test</groupId>
            <artifactId>springcloud-feginclient</artifactId>
            <version>1.0.0</version>
        </dependency>
    </dependencies>
</project>

The client class that calls the service

package com.kite.test.springcloud.client.controller;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;
import org.springframework.web.bind.annotation.RestController;
import com.kite.test.springcloud.feginclient.HelloFeginClient;
/**
 * Call test.
 * CallHelloController: exercises the service through the Feign client.
 * @author pengliang 2016-08-08 16:42
 */
@RestController
public class CallHelloController {

    private Logger log = LoggerFactory.getLogger(CallHelloController.class);

    @Autowired
    private HelloFeginClient helloFeginClient;

    @RequestMapping(value = "/hello", method = RequestMethod.GET)
    public String hello(String name) {
        log.info("call hello parameter:{}", name);
        return helloFeginClient.hello(name);
    }
}

The client startup class

package com.kite.test.springcloud.client;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.client.circuitbreaker.EnableCircuitBreaker;
import org.springframework.cloud.client.discovery.EnableDiscoveryClient;
import org.springframework.cloud.netflix.feign.EnableFeignClients;
@SpringBootApplication
@EnableDiscoveryClient
@EnableFeignClients(basePackages = "com.kite.test")
@EnableCircuitBreaker
public class ClientApplication {
    public static void main(String[] args) {
        SpringApplication.run(ClientApplication.class, args);
    }
}
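With Eureka, the config server, springcloud-server and springcloud-client all running, the chain can be exercised end to end; the client port here is an assumption:

curl "http://localhost:8080/hello?name=kite"
# reply produced by springcloud-server, via Feign: {hello: 'kite'}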
Call flow diagram

(figure: the call flow between client, Feign, Eureka, and server)

A practical pattern:

In a concrete microservice use case we generally use JSON as the transport format. Feign clients wrap service calls behind a typed interface, Hystrix isolates dependency failures around those Feign calls, and Ribbon handles the load balancing (Feign already integrates Ribbon by default).
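As a hedged illustration of that layering, a Brixton-era Feign client can attach its Hystrix fallback directly on the interface; the variant below is illustrative and not part of the project above:

package com.kite.test.springcloud.feginclient; // hypothetical variant

import org.springframework.cloud.netflix.feign.FeignClient;
import org.springframework.stereotype.Component;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;
import org.springframework.web.bind.annotation.RequestParam;

@FeignClient(value = "HelloServer", fallback = HelloFeginClientFallback.class)
interface HelloFeginClientWithFallback {
    @RequestMapping(value = "/hello", method = RequestMethod.GET)
    String hello(@RequestParam(name = "name") String name);
}

// Returned when HelloServer is unreachable or the call is rejected by Hystrix.
@Component
class HelloFeginClientFallback implements HelloFeginClientWithFallback {
    @Override
    public String hello(String name) {
        return "{hello: 'fallback'}";
    }
}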

Project source: https://github.com/mykite/springcloud-test-compoments.git

Setting up a Redis cluster environment

A pseudo-cluster, built on a single machine (no spare machines available).

Notes

Redis supports Cluster from version 3.0 onward.
The version used here is 3.2.8.
A Redis cluster needs at least six nodes, three of which are masters.

Modify the configuration

Continuing from the Redis environment setup (covered in a later section).

redis1 configuration (enable the cluster settings):
port 7001
cluster-enabled yes
cluster-config-file nodes-7001.conf
cluster-node-timeout 5000

redis2 configuration:
port 7002
cluster-enabled yes
cluster-config-file nodes-7002.conf
cluster-node-timeout 5000

redis3 configuration:
port 7003
cluster-enabled yes
cluster-config-file nodes-7003.conf
cluster-node-timeout 5000


Start Redis

cd redis1
./redis-server redis.conf
cd redis2
./redis-server redis.conf
cd redis3
./redis-server redis.conf

Check that the services are running

ps -ef | grep redis    # list the redis processes


Create the cluster

Redis ships an official tool, redis-trib.rb, written in Ruby, so install Ruby first.

Install Ruby

yum -y install ruby ruby-devel rubygems rpm-build

Install the redis gem through Ruby's package tool

gem install redis

Create the cluster with redis-trib.rb

View the command-line options

[root@vultr src]# ./redis-trib.rb
Usage: redis-trib <command> <options> <arguments ...>

  set-timeout     host:port milliseconds
  reshard         host:port
                  --pipeline <arg>
                  --to <arg>
                  --yes
                  --slots <arg>
                  --from <arg>
                  --timeout <arg>
  del-node        host:port node_id
  add-node        new_host:new_port existing_host:existing_port
                  --slave
                  --master-id <arg>
  fix             host:port
                  --timeout <arg>
  help            (show this help)
  rebalance       host:port
                  --pipeline <arg>
                  --simulate
                  --auto-weights
                  --use-empty-masters
                  --weight <arg>
                  --timeout <arg>
                  --threshold <arg>
  check           host:port
  info            host:port
  create          host1:port1 ... hostN:portN
                  --replicas <arg>
  import          host:port
                  --replace
                  --copy
                  --from <arg>
  call            host:port command arg arg .. arg

For check, fix, reshard, del-node, set-timeout you can specify the host and port of any working node in the cluster.

Creating the cluster: the first attempt fails

./redis-trib.rb create --replicas 1 127.0.0.1:7001 127.0.0.1:7002 127.0.0.1:7003
*** ERROR: Invalid configuration for cluster creation.
*** Redis Cluster requires at least 3 master nodes.
*** This is not possible with 3 nodes and 1 replicas per node.
*** At least 6 nodes are required.
At least six nodes are needed.

Start the additional nodes
(screenshot: starting the extra instances)

Re-create the Redis cluster
(screenshots: redis-trib output for the successful creation)
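Assuming three more instances were configured the same way on ports 7004-7006, the create command that succeeds looks like this (three masters, each with one replica):

./redis-trib.rb create --replicas 1 \
  127.0.0.1:7001 127.0.0.1:7002 127.0.0.1:7003 \
  127.0.0.1:7004 127.0.0.1:7005 127.0.0.1:7006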

Test

1. Connect to Redis with redis-cli in cluster mode: redis-cli -c -p 7001
[root@vultr bin]# redis-cli -c -p 7001
127.0.0.1:7001> set name kite
-> Redirected to slot [5798] located at 127.0.0.1:7002
OK
127.0.0.1:7002> get name
"kite"
127.0.0.1:7002> exit
[root@vultr bin]# redis-cli -c -p 7003
127.0.0.1:7003> get name
-> Redirected to slot [5798] located at 127.0.0.1:7002
"kite"
127.0.0.1:7002>
The data is shared across the cluster; requests are redirected to the node that owns the slot.

Reference

Redis 3.2.1 cluster setup

Using Redis with Spring Boot

Create the project

Create the project with Maven.

Add dependencies

pom.xml

<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <groupId>com.kite.springboot</groupId>
    <artifactId>redis</artifactId>
    <version>0.0.1-SNAPSHOT</version>
    <packaging>jar</packaging>
    <name>redis</name>
    <url>http://maven.apache.org</url>
    <properties>
        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
        <java.version>1.7</java.version>
    </properties>
    <parent>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-parent</artifactId>
        <version>1.5.1.RELEASE</version>
    </parent>
    <dependencies>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-web</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-data-redis</artifactId>
        </dependency>
        <dependency>
            <groupId>junit</groupId>
            <artifactId>junit</artifactId>
            <scope>test</scope>
        </dependency>
    </dependencies>
</project>

Add configuration

application.yml

spring.redis:
  database: 0
  host: 45.32.112.158
  port: 7001
  pool:
    max-idle: 8
    min-idle: 0
    max-active: 8
    max-wait: -1
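The host/port above point the client at a single node. Spring Boot 1.5 can also be given the whole cluster; a hedged sketch, reusing the same server address:

spring.redis:
  cluster:
    nodes:
      - 45.32.112.158:7001
      - 45.32.112.158:7002
      - 45.32.112.158:7003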

Trying it out

Problems encountered

package com.kite.springboot.redis;
import java.util.Set;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.data.redis.core.StringRedisTemplate;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;
/**
 * Hello world!
 */
@SpringBootApplication
@RestController
public class App {

    @Autowired
    StringRedisTemplate stringRedisTemplate;

    public static void main(String[] args) {
        SpringApplication.run(App.class, args);
    }

    @GetMapping("/test")
    public String test() {
        Set<String> keys = stringRedisTemplate.keys("aa");
        System.out.println(keys);
        return "ok";
    }
}
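For a basic write and read through the same template, something like this works (key and value are illustrative):

// inside a controller or test where stringRedisTemplate is injected
stringRedisTemplate.opsForValue().set("name", "kite");
String name = stringRedisTemplate.opsForValue().get("name"); // "kite"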
connection refused
Could not connect: comment out the bind line (for testing only; do not do this in production).
DENIED Redis is running in protected mode because protected mode is enabled
Protected mode is on: set protected-mode no (again, testing only; do not do this in production).
After that, the connection succeeds and output is normal.

After commenting out bind, the server shows it is listening on *

Redis environment setup (for the cluster)

Download Redis

Download address

Download redis-stable.tar.gz; at the time of writing this is version 3.2.

Install

tar -zxvf redis-stable.tar.gz
cd redis-stable
make && make PREFIX=/usr/local/redis1 install    # PREFIX sets the install directory

PS: installation pitfalls

Check the gcc version:
gcc -v    # must not be lower than 4.2
Upgrade gcc:
yum update gcc    # upgrades it to 4.4.7

Configuration

redis configuration in detail

  • NETWORK settings

    bind 127.0.0.1 (the host address to bind)
    protected-mode yes (network protected mode; enabled by default)
    port 6379 (the Redis service port; default 6379)
    tcp-backlog 511
    timeout 0 (close a client connection once it has been idle for this many seconds; the default 0 disables the feature)
    tcp-keepalive 300 (health check for client TCP connections; if non-zero, the server periodically sends SO_KEEPALIVE probes to check on clients. The default is 300 seconds, i.e. one check every 300 seconds. The benefit: if a client dies abnormally, the server notices and closes its side of the channel. Recommended to keep enabled.)
  • GENERAL settings

    daemonize no (when yes, run as a daemon. The default is no, which is convenient for debugging in test environments; in production, set it to yes)
    supervised no
    pidfile /var/run/redis_6379.pid
    loglevel notice
    logfile ""
    syslog-enabled no
    syslog-ident redis
    syslog-facility local0 (the syslog facility; must be USER or LOCAL0-LOCAL7; not enabled by default)
    databases 16
  • SNAPSHOTTING settings

    save 900 1
    save 300 10
    save 60 10000
    stop-writes-on-bgsave-error yes
    rdbcompression yes
    rdbchecksum yes
    dbfilename dump.rdb
    dir ./
  • REPLICATION (master-slave, high availability) settings

    slaveof <masterip> <masterport>
    masterauth <master-password>
    slave-serve-stale-data yes
    slave-read-only yes
  • SECURITY settings

  • LIMITS: resource limit settings
  • APPEND ONLY MODE: AOF persistence settings
  • LUA SCRIPTING: Lua script settings
  • REDIS CLUSTER: cluster settings
  • SLOW LOG: slow-log settings
  • LATENCY MONITOR: latency monitoring settings
  • EVENT NOTIFICATION: event notification settings
  • ADVANCED CONFIG: advanced settings

Too lazy to write the rest up; if you need the specifics, read the redis.conf file itself. An annotated copy follows.

Annotated redis.conf

# Redis configuration file example.
## Example Redis configuration file
#
# Note that in order to read the configuration file, Redis must be
# started with the file path as first argument:
## Note: for Redis to read the configuration file, its path must be passed as the first argument:
#
# ./redis-server /path/to/redis.conf
# Note on units: when memory size is needed, it is possible to specify
# it in the usual form of 1k 5GB 4M and so forth:
## Units: when a memory size is needed, specify it in the usual form like 1k, 5GB, 4M:
#
# 1k => 1000 bytes
# 1kb => 1024 bytes
# 1m => 1000000 bytes
# 1mb => 1024*1024 bytes
# 1g => 1000000000 bytes
# 1gb => 1024*1024*1024 bytes
#
# units are case insensitive so 1GB 1Gb 1gB are all the same.
## Units are case-insensitive, so 1GB, 1Gb and 1gB are all the same
################################## INCLUDES ###################################
# Include one or more other config files here. This is useful if you
# have a standard template that goes to all Redis servers but also need
# to customize a few per-server settings. Include files can include
# other files, so use this wisely.
## Include one or more other config files here. Useful when you have a standard
## template shared by all Redis servers but each server needs a few settings of its own.
# Notice option "include" won't be rewritten by command "CONFIG REWRITE"
# from admin or Redis Sentinel. Since Redis always uses the last processed
# line as value of a configuration directive, you'd better put includes
# at the beginning of this file to avoid overwriting config change at runtime.
#
## Note: "include" is not rewritten by the CONFIG REWRITE command from an admin or Redis Sentinel.
## Since Redis uses the last processed line as a directive's value, put includes near the top to avoid overwriting config changes made at runtime.
# If instead you are interested in using includes to override configuration
# options, it is better to use include as the last line.
## Conversely, if you want to use includes to override settings, put them near the end.
#
# include /path/to/local.conf
# include /path/to/other.conf
################################## NETWORK #####################################
# By default, if no "bind" configuration directive is specified, Redis listens
# for connections from all the network interfaces available on the server.
# It is possible to listen to just one or multiple selected interfaces using
# the "bind" configuration directive, followed by one or more IP addresses.
## By default, if no "bind" is configured, Redis listens for connections on all available interfaces.
## If possible, it is better to bind one or more specific addresses.
# Examples:
#
# bind 192.168.1.100 10.0.0.1
# bind 127.0.0.1 ::1
#
# ~~~ WARNING ~~~ If the computer running Redis is directly exposed to the
# internet, binding to all the interfaces is dangerous and will expose the
# instance to everybody on the internet. So by default we uncomment the
# following bind directive, that will force Redis to listen only into
# the IPv4 lookback interface address (this means Redis will be able to
# accept connections only from clients running into the same computer it
# is running).
## -- WARNING -- If the machine running Redis is directly exposed to the internet, binding to all
## interfaces is dangerous and exposes the instance to everyone on the network. So by default the
## bind directive below is left active, forcing Redis to listen only on the loopback interface
## (i.e. only clients running on the same machine can connect).
#
# IF YOU ARE SURE YOU WANT YOUR INSTANCE TO LISTEN TO ALL THE INTERFACES
# JUST COMMENT THE FOLLOWING LINE.
## If you are really sure you want your instance to listen on all interfaces, just comment out the following line
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
bind 127.0.0.1
# Protected mode is a layer of security protection, in order to avoid that
# Redis instances left open on the internet are accessed and exploited.
## Protected mode is a layer of security, meant to keep reachable Redis instances from being accessed and exploited
# When protected mode is on and if:
#
# 1) The server is not binding explicitly to a set of addresses using the
# "bind" directive.
# 2) No password is configured.
## Protected mode kicks in when 1) the server does not explicitly bind any address and 2) no password
## is configured; the server then only accepts connections from local loopback IPs and Unix domain sockets
# The server only accepts connections from clients connecting from the
# IPv4 and IPv6 loopback addresses 127.0.0.1 and ::1, and from Unix domain
# sockets.
#
# By default protected mode is enabled. You should disable it only if
# you are sure you want clients from other hosts to connect to Redis
# even if no authentication is configured, nor a specific set of interfaces
# are explicitly listed using the "bind" directive.
# Protected mode is enabled by default; disable it only if you want clients from other hosts to connect (the last two sentences above are repetitive and were not translated)
protected-mode no
# Accept connections on the specified port, default is 6379 (IANA #815344).
# If port 0 is specified Redis will not listen on a TCP socket.
## Listening port; if set to 0, Redis will not listen on a TCP socket
port 6379
# TCP listen() backlog.
#
# In high requests-per-second environments you need an high backlog in order
# to avoid slow clients connections issues. Note that the Linux kernel
# will silently truncate it to the value of /proc/sys/net/core/somaxconn so
# make sure to raise both the value of somaxconn and tcp_max_syn_backlog
# in order to get the desired effect.
## This parameter sets the length of the TCP completed-connection queue (after the three-way handshake).
## It must not exceed the kernel's /proc/sys/net/core/somaxconn; the default here is 511, while Linux defaults to 128.
## When concurrency is high and clients are slow, tune the two values together. (Note: copied from the web; meaning not fully understood.)
tcp-backlog 511
# Unix socket.
#
# Specify the path for the Unix socket that will be used to listen for
# incoming connections. There is no default, so Redis will not listen
# on a unix socket when not specified.
## Path for the Unix socket; if not specified, Redis will not listen on a Unix socket
# unixsocket /tmp/redis.sock
# unixsocketperm 700
# Close the connection after a client is idle for N seconds (0 to disable)
# Idle timeout for client connections; 0 means never time out
timeout 0
# TCP keepalive.
#
# If non-zero, use SO_KEEPALIVE to send TCP ACKs to clients in absence
# of communication. This is useful for two reasons:
#
# 1) Detect dead peers.
# 2) Take the connection alive from the point of view of network
# equipment in the middle.
#
# On Linux, the specified value (in seconds) is the period used to send ACKs.
# Note that to close the connection the double of the time is needed.
# On other kernels the period depends on the kernel configuration.
#
# A reasonable value for this option is 300 seconds, which is the new
# Redis default starting with Redis 3.2.1.
## Periodically use SO_KEEPALIVE to check whether clients are still healthy, so the server does not
## block forever; the official suggestion is 300 seconds (the default since Redis 3.2.1)
tcp-keepalive 300
################################# GENERAL #####################################
# By default Redis does not run as a daemon. Use 'yes' if you need it.
# Note that Redis will write a pid file in /var/run/redis.pid when daemonized.
## By default Redis does not start as a daemon; set this to "yes" if you need that. When daemonized,
## Redis writes a pid file to /var/run/redis.pid
daemonize no
# If you run Redis from upstart or systemd, Redis can interact with your
# supervision tree. Options:
# supervised no - no supervision interaction
# supervised upstart - signal upstart by putting Redis into SIGSTOP mode
# supervised systemd - signal systemd by writing READY=1 to $NOTIFY_SOCKET
# supervised auto - detect upstart or systemd method based on
# UPSTART_JOB or NOTIFY_SOCKET environment variables
# Note: these supervision methods only signal "process is ready."
# They do not enable continuous liveness pings back to your supervisor.
## Seems to be about supervision integration (upstart/systemd); my Linux knowledge is shaky, so I won't translate this further
supervised no
# If a pid file is specified, Redis writes it where specified at startup
# and removes it at exit.
#
# When the server runs non daemonized, no pid file is created if none is
# specified in the configuration. When the server is daemonized, the pid file
# is used even if not specified, defaulting to "/var/run/redis.pid".
#
# Creating a pid file is best effort: if Redis is not able to create it
# nothing bad happens, the server will start and run normally.
## If not daemonized and no pidfile is specified, no pid file is created
## If daemonized, /var/run/redis.pid is used even when not specified
## If not daemonized but a pidfile is specified, the pidfile wins
## Creating the pid file is best-effort: if it cannot be created, Redis still starts and runs normally
pidfile /var/run/redis_6379.pid
# Specify the server verbosity level.
# This can be one of:
# debug (a lot of information, useful for development/testing)
# verbose (many rarely useful info, but not a mess like the debug level)
# notice (moderately verbose, what you want in production probably)
# warning (only very important / critical messages are logged)
# Log level.
# One of the following:
# debug (lots of information; for development/testing)
# verbose (less noisy than debug, but still a lot)
# notice (moderately verbose; suitable for production)
# warning (only very important messages are logged)
loglevel notice
# Specify the log file name. Also the empty string can be used to force
# Redis to log on the standard output. Note that if you use standard
# output for logging but daemonize, logs will be sent to /dev/null
## Log file location
logfile ""
# To enable logging to the system logger, just set 'syslog-enabled' to yes,
# and optionally update the other syslog parameters to suit your needs.
# syslog-enabled no
# Specify the syslog identity.
# syslog-ident redis
# Specify the syslog facility. Must be USER or between LOCAL0-LOCAL7.
# syslog-facility local0
# Set the number of databases. The default database is DB 0, you can select
# a different one on a per-connection basis using SELECT <dbid> where
# dbid is a number between 0 and 'databases'-1
## Number of databases; 16 by default (0-15)
databases 16
################################ SNAPSHOTTING ################################
#
# Save the DB on disk:
#
# save <seconds> <changes>
#
# Will save the DB if both the given number of seconds and the given
# number of write operations against the DB occurred.
#
# In the example below the behaviour will be to save:
# after 900 sec (15 min) if at least 1 key changed
# after 300 sec (5 min) if at least 10 keys changed
# after 60 sec if at least 10000 keys changed
#
# Note: you can disable saving completely by commenting out all "save" lines.
#
# It is also possible to remove all the previously configured save
# points by adding a save directive with a single empty string argument
# like in the following example:
#
# save ""
#
# Saving the DB to disk:
#
# format: save <interval-in-seconds> <number-of-writes>
#
# Saves the dataset when both the given time interval and the given number of write operations are reached.
#
# The example below means:
#   save after 900 sec (15 min) if at least 1 key changed
#   save after 300 sec (5 min) if at least 10 keys changed
#   save after 60 sec if at least 10000 keys changed
#
# Note: you can comment out all the "save" lines to disable saving entirely,
# or disable it with a single empty string:
# save ""
save 900 1
save 300 10
save 60 10000
# By default Redis will stop accepting writes if RDB snapshots are enabled
# (at least one save point) and the latest background save failed.
# This will make the user aware (in a hard way) that data is not persisting
# on disk properly, otherwise chances are that no one will notice and some
# disaster will happen.
#
# If the background saving process will start working again Redis will
# automatically allow writes again.
#
# However if you have setup your proper monitoring of the Redis server
# and persistence, you may want to disable this feature so that Redis will
# continue to work as usual even if there are problems with disk,
# permissions, and so forth.
# By default, if the last background save failed, redis stops accepting writes,
# to make the user aware (the hard way) that data is not persisting to disk properly;
# otherwise nobody might notice until disaster strikes.
#
# Once the background save process is working again, redis automatically allows writes again.
#
# However, if you have solid monitoring in place, you may not want this behavior; change it to no in that case.
stop-writes-on-bgsave-error yes
# Compress string objects using LZF when dump .rdb databases?
# For default that's set to 'yes' as it's almost always a win.
# If you want to save some CPU in the saving child set it to 'no' but
# the dataset will likely be bigger if you have compressible values or keys.
# Whether to compress string objects with LZF when dumping the .rdb database
# The default is yes
# Set it to no if you want the saving child process to save some CPU,
# though the dataset will probably end up bigger
rdbcompression yes
# Since version 5 of RDB a CRC64 checksum is placed at the end of the file.
# This makes the format more resistant to corruption but there is a performance
# hit to pay (around 10%) when saving and loading RDB files, so you can disable it
# for maximum performances.
#
# RDB files created with checksum disabled have a checksum of zero that will
# tell the loading code to skip the check.
# Whether to checksum the rdb file
rdbchecksum yes
# The filename where to dump the DB
# Name of the dump file
dbfilename dump.rdb
# The working directory.
#
# The DB will be written inside this directory, with the filename specified
# above using the 'dbfilename' configuration directive.
#
# The Append Only File will also be created inside this directory.
#
# Note that you must specify a directory here, not a file name.
# The working directory
# The dbfilename above only specifies the file name;
# the file is written inside this directory. This setting must be a directory, not a file name.
dir ./
################################# REPLICATION #################################
# Master-Slave replication. Use slaveof to make a Redis instance a copy of
# another Redis server. A few things to understand ASAP about Redis replication.
#
# 1) Redis replication is asynchronous, but you can configure a master to
# stop accepting writes if it appears to be not connected with at least
# a given number of slaves.
# 2) Redis slaves are able to perform a partial resynchronization with the
# master if the replication link is lost for a relatively small amount of
# time. You may want to configure the replication backlog size (see the next
# sections of this file) with a sensible value depending on your needs.
# 3) Replication is automatic and does not need user intervention. After a
# network partition slaves automatically try to reconnect to masters
# and resynchronize with them.
#
# slaveof <masterip> <masterport>
# If the master is password protected (using the "requirepass" configuration
# directive below) it is possible to tell the slave to authenticate before
# starting the replication synchronization process, otherwise the master will
# refuse the slave request.
#
# masterauth <master-password>
# When a slave loses its connection with the master, or when the replication
# is still in progress, the slave can act in two different ways:
#
# 1) if slave-serve-stale-data is set to 'yes' (the default) the slave will
# still reply to client requests, possibly with out of date data, or the
# data set may just be empty if this is the first synchronization.
#
# 2) if slave-serve-stale-data is set to 'no' the slave will reply with
# an error "SYNC with master in progress" to all the kind of commands
# but to INFO and SLAVEOF.
#
# When a slave loses its connection with the master, or replication is still
# in progress, the slave can behave in two ways:
#
# 1) if yes, the slave still answers client requests, possibly with stale data,
# or with an empty dataset if this is the first synchronization
#
# 2) if no, the slave replies "SYNC with master in progress" to every command
# except INFO and SLAVEOF
slave-serve-stale-data yes
# You can configure a slave instance to accept writes or not. Writing against
# a slave instance may be useful to store some ephemeral data (because data
# written on a slave will be easily deleted after resync with the master) but
# may also cause problems if clients are writing to it because of a
# misconfiguration.
#
# Since Redis 2.6 by default slaves are read-only.
#
# Note: read only slaves are not designed to be exposed to untrusted clients
# on the internet. It's just a protection layer against misuse of the instance.
# Still a read only slave exports by default all the administrative commands
# such as CONFIG, DEBUG, and so forth. To a limited extent you can improve
# security of read only slaves using 'rename-command' to shadow all the
# administrative / dangerous commands.
# Whether slaves are read-only
slave-read-only yes
# Replication SYNC strategy: disk or socket.
#
# -------------------------------------------------------
# WARNING: DISKLESS REPLICATION IS EXPERIMENTAL CURRENTLY
# -------------------------------------------------------
#
# New slaves and reconnecting slaves that are not able to continue the replication
# process just receiving differences, need to do what is called a "full
# synchronization". An RDB file is transmitted from the master to the slaves.
# The transmission can happen in two different ways:
#
# 1) Disk-backed: The Redis master creates a new process that writes the RDB
# file on disk. Later the file is transferred by the parent
# process to the slaves incrementally.
# 2) Diskless: The Redis master creates a new process that directly writes the
# RDB file to slave sockets, without touching the disk at all.
#
# With disk-backed replication, while the RDB file is generated, more slaves
# can be queued and served with the RDB file as soon as the current child producing
# the RDB file finishes its work. With diskless replication instead once
# the transfer starts, new slaves arriving will be queued and a new transfer
# will start when the current one terminates.
#
# When diskless replication is used, the master waits a configurable amount of
# time (in seconds) before starting the transfer in the hope that multiple slaves
# will arrive and the transfer can be parallelized.
#
# With slow disks and fast (large bandwidth) networks, diskless replication
# works better.
# Diskless replication
repl-diskless-sync no
# When diskless replication is enabled, it is possible to configure the delay
# the server waits in order to spawn the child that transfers the RDB via socket
# to the slaves.
#
# This is important since once the transfer starts, it is not possible to serve
# new slaves arriving, that will be queued for the next RDB transfer, so the server
# waits a delay in order to let more slaves arrive.
#
# The delay is specified in seconds, and by default is 5 seconds. To disable
# it entirely just set it to 0 seconds and the transfer will start ASAP.
# Delay before starting a diskless sync
repl-diskless-sync-delay 5
# Slaves send PINGs to server in a predefined interval. It's possible to change
# this interval with the repl_ping_slave_period option. The default value is 10
# seconds.
#
# repl-ping-slave-period 10
# The following option sets the replication timeout for:
#
# 1) Bulk transfer I/O during SYNC, from the point of view of slave.
# 2) Master timeout from the point of view of slaves (data, pings).
# 3) Slave timeout from the point of view of masters (REPLCONF ACK pings).
#
# It is important to make sure that this value is greater than the value
# specified for repl-ping-slave-period otherwise a timeout will be detected
# every time there is low traffic between the master and the slave.
#
# repl-timeout 60
# Disable TCP_NODELAY on the slave socket after SYNC?
#
# If you select "yes" Redis will use a smaller number of TCP packets and
# less bandwidth to send data to slaves. But this can add a delay for
# the data to appear on the slave side, up to 40 milliseconds with
# Linux kernels using a default configuration.
#
# If you select "no" the delay for data to appear on the slave side will
# be reduced but more bandwidth will be used for replication.
#
# By default we optimize for low latency, but in very high traffic conditions
# or when the master and slaves are many hops away, turning this to "yes" may
# be a good idea.
repl-disable-tcp-nodelay no
# Set the replication backlog size. The backlog is a buffer that accumulates
# slave data when slaves are disconnected for some time, so that when a slave
# wants to reconnect again, often a full resync is not needed, but a partial
# resync is enough, just passing the portion of data the slave missed while
# disconnected.
#
# The bigger the replication backlog, the longer the time the slave can be
# disconnected and later be able to perform a partial resynchronization.
#
# The backlog is only allocated once there is at least a slave connected.
#
# Replication backlog size. The backlog is a buffer that accumulates slave data
# while slaves are disconnected, so a reconnecting slave usually does not need a
# full resync; a partial resync passing only the data missed while disconnected is enough.
# repl-backlog-size 1mb
# After a master has no longer connected slaves for some time, the backlog
# will be freed. The following option configures the amount of seconds that
# need to elapse, starting from the time the last slave disconnected, for
# the backlog buffer to be freed.
#
# A value of 0 means to never release the backlog.
#
# repl-backlog-ttl 3600
# The slave priority is an integer number published by Redis in the INFO output.
# It is used by Redis Sentinel in order to select a slave to promote into a
# master if the master is no longer working correctly.
#
# A slave with a low priority number is considered better for promotion, so
# for instance if there are three slaves with priority 10, 100, 25 Sentinel will
# pick the one with priority 10, that is the lowest.
#
# However a special priority of 0 marks the slave as not able to perform the
# role of master, so a slave with priority of 0 will never be selected by
# Redis Sentinel for promotion.
#
# By default the priority is 100.
# When the master stops working correctly, Redis Sentinel promotes one of the slaves;
# the lower this value, the more likely the slave is to be picked, except that 0 means
# the slave can never be selected.
#
# The default priority is 100.
slave-priority 100
# It is possible for a master to stop accepting writes if there are less than
# N slaves connected, having a lag less or equal than M seconds.
#
# The N slaves need to be in "online" state.
#
# The lag in seconds, that must be <= the specified value, is calculated from
# the last ping received from the slave, that is usually sent every second.
#
# This option does not GUARANTEE that N replicas will accept the write, but
# will limit the window of exposure for lost writes in case not enough slaves
# are available, to the specified number of seconds.
#
# For example to require at least 3 slaves with a lag <= 10 seconds use:
#
# min-slaves-to-write 3
# min-slaves-max-lag 10
#
# Setting one or the other to 0 disables the feature.
#
# By default min-slaves-to-write is set to 0 (feature disabled) and
# min-slaves-max-lag is set to 10.
################################## SECURITY ###################################
# Require clients to issue AUTH <PASSWORD> before processing any other
# commands. This might be useful in environments in which you do not trust
# others with access to the host running redis-server.
#
# This should stay commented out for backward compatibility and because most
# people do not need auth (e.g. they run their own servers).
#
# Warning: since Redis is pretty fast an outside user can try up to
# 150k passwords per second against a good box. This means that you should
# use a very strong password otherwise it will be very easy to break.
# Set a password
# requirepass foobared
# Command renaming.
#
# It is possible to change the name of dangerous commands in a shared
# environment. For instance the CONFIG command may be renamed into something
# hard to guess so that it will still be available for internal-use tools
# but not available for general clients.
#
# Example:
#
# rename-command CONFIG b840fc02d524045429941cc15f59e41cb7be6c52
#
# It is also possible to completely kill a command by renaming it into
# an empty string:
#
# rename-command CONFIG ""
#
# Please note that changing the name of commands that are logged into the
# AOF file or transmitted to slaves may cause problems.
################################### LIMITS ####################################
# Set the max number of connected clients at the same time. By default
# this limit is set to 10000 clients, however if the Redis server is not
# able to configure the process file limit to allow for the specified limit
# the max number of allowed clients is set to the current file limit
# minus 32 (as Redis reserves a few file descriptors for internal uses).
#
# Once the limit is reached Redis will close all the new connections sending
# an error 'max number of clients reached'.
## Maximum number of simultaneous connections; once the limit is reached, redis closes
# all new connections with a 'max number of clients reached' error.
# maxclients 10000
# Don't use more memory than the specified amount of bytes.
# When the memory limit is reached Redis will try to remove keys
# according to the eviction policy selected (see maxmemory-policy).
#
# If Redis can't remove keys according to the policy, or if the policy is
# set to 'noeviction', Redis will start to reply with errors to commands
# that would use more memory, like SET, LPUSH, and so on, and will continue
# to reply to read-only commands like GET.
#
# This option is usually useful when using Redis as an LRU cache, or to set
# a hard memory limit for an instance (using the 'noeviction' policy).
#
# WARNING: If you have slaves attached to an instance with maxmemory on,
# the size of the output buffers needed to feed the slaves are subtracted
# from the used memory count, so that network problems / resyncs will
# not trigger a loop where keys are evicted, and in turn the output
# buffer of slaves is full with DELs of keys evicted triggering the deletion
# of more keys, and so forth until the database is completely emptied.
#
# In short... if you have slaves attached it is suggested that you set a lower
# limit for maxmemory so that there is some free RAM on the system for slave
# output buffers (but this is not needed if the policy is 'noeviction').
# If you set this, once the cached data reaches this amount of memory, redis removes
# keys according to the eviction policy you selected.
#
# If redis cannot remove keys under the policy, or the policy is set to 'noeviction',
# redis starts replying with errors to write commands such as set and lpush,
# while still serving read-only commands such as get
# maxmemory <bytes>
# MAXMEMORY POLICY: how Redis will select what to remove when maxmemory
# is reached. You can select among five behaviors:
#
# volatile-lru -> remove the key with an expire set using an LRU algorithm
# allkeys-lru -> remove any key according to the LRU algorithm
# volatile-random -> remove a random key with an expire set
# allkeys-random -> remove a random key, any key
# volatile-ttl -> remove the key with the nearest expire time (minor TTL)
# noeviction -> don't expire at all, just return an error on write operations
#
# Note: with any of the above policies, Redis will return an error on write
# operations, when there are no suitable keys for eviction.
#
# At the date of writing these commands are: set setnx setex append
# incr decr rpush lpush rpushx lpushx linsert lset rpoplpush sadd
# sinter sinterstore sunion sunionstore sdiff sdiffstore zadd zincrby
# zunionstore zinterstore hset hsetnx hmset hincrby incrby decrby
# getset mset msetnx exec sort
#
# The default is:
## Sets the eviction policy (one of the options above)
# maxmemory-policy noeviction
# LRU and minimal TTL algorithms are not precise algorithms but approximated
# algorithms (in order to save memory), so you can tune it for speed or
# accuracy. For default Redis will check five keys and pick the one that was
# used less recently, you can change the sample size using the following
# configuration directive.
#
# The default of 5 produces good enough results. 10 Approximates very closely
# true LRU but costs a bit more CPU. 3 is very fast but not very accurate.
#
# maxmemory-samples 5
############################## APPEND ONLY MODE ###############################
# By default Redis asynchronously dumps the dataset on disk. This mode is
# good enough in many applications, but an issue with the Redis process or
# a power outage may result into a few minutes of writes lost (depending on
# the configured save points).
#
# The Append Only File is an alternative persistence mode that provides
# much better durability. For instance using the default data fsync policy
# (see later in the config file) Redis can lose just one second of writes in a
# dramatic event like a server power outage, or a single write if something
# wrong with the Redis process itself happens, but the operating system is
# still running correctly.
#
# AOF and RDB persistence can be enabled at the same time without problems.
# If the AOF is enabled on startup Redis will load the AOF, that is the file
# with the better durability guarantees.
#
# Please check http://redis.io/topics/persistence for more information.
appendonly no
# The name of the append only file (default: "appendonly.aof")
appendfilename "appendonly.aof"
# The fsync() call tells the Operating System to actually write data on disk
# instead of waiting for more data in the output buffer. Some OS will really flush
# data on disk, some other OS will just try to do it ASAP.
#
# Redis supports three different modes:
#
# no: don't fsync, just let the OS flush the data when it wants. Faster.
# always: fsync after every write to the append only log. Slow, Safest.
# everysec: fsync only one time every second. Compromise.
#
# The default is "everysec", as that's usually the right compromise between
# speed and data safety. It's up to you to understand if you can relax this to
# "no" that will let the operating system flush the output buffer when
# it wants, for better performances (but if you can live with the idea of
# some data loss consider the default persistence mode that's snapshotting),
# or on the contrary, use "always" that's very slow but a bit safer than
# everysec.
#
# More details please check the following article:
# http://antirez.com/post/redis-persistence-demystified.html
#
# If unsure, use "everysec".
# appendfsync always
appendfsync everysec
# appendfsync no
# When the AOF fsync policy is set to always or everysec, and a background
# saving process (a background save or AOF log background rewriting) is
# performing a lot of I/O against the disk, in some Linux configurations
# Redis may block too long on the fsync() call. Note that there is no fix for
# this currently, as even performing fsync in a different thread will block
# our synchronous write(2) call.
#
# In order to mitigate this problem it's possible to use the following option
# that will prevent fsync() from being called in the main process while a
# BGSAVE or BGREWRITEAOF is in progress.
#
# This means that while another child is saving, the durability of Redis is
# the same as "appendfsync none". In practical terms, this means that it is
# possible to lose up to 30 seconds of log in the worst scenario (with the
# default Linux settings).
#
# If you have latency problems turn this to "yes". Otherwise leave it as
# "no" that is the safest pick from the point of view of durability.
no-appendfsync-on-rewrite no
# Automatic rewrite of the append only file.
# Redis is able to automatically rewrite the log file implicitly calling
# BGREWRITEAOF when the AOF log size grows by the specified percentage.
#
# This is how it works: Redis remembers the size of the AOF file after the
# latest rewrite (if no rewrite has happened since the restart, the size of
# the AOF at startup is used).
#
# This base size is compared to the current size. If the current size is
# bigger than the specified percentage, the rewrite is triggered. Also
# you need to specify a minimal size for the AOF file to be rewritten, this
# is useful to avoid rewriting the AOF file even if the percentage increase
# is reached but it is still pretty small.
#
# Specify a percentage of zero in order to disable the automatic AOF
# rewrite feature.
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
# An AOF file may be found to be truncated at the end during the Redis
# startup process, when the AOF data gets loaded back into memory.
# This may happen when the system where Redis is running
# crashes, especially when an ext4 filesystem is mounted without the
# data=ordered option (however this can't happen when Redis itself
# crashes or aborts but the operating system still works correctly).
#
# Redis can either exit with an error when this happens, or load as much
# data as possible (the default now) and start if the AOF file is found
# to be truncated at the end. The following option controls this behavior.
#
# If aof-load-truncated is set to yes, a truncated AOF file is loaded and
# the Redis server starts emitting a log to inform the user of the event.
# Otherwise if the option is set to no, the server aborts with an error
# and refuses to start. When the option is set to no, the user requires
# to fix the AOF file using the "redis-check-aof" utility before to restart
# the server.
#
# Note that if the AOF file will be found to be corrupted in the middle
# the server will still exit with an error. This option only applies when
# Redis will try to read more data from the AOF file but not enough bytes
# will be found.
aof-load-truncated yes
################################ LUA SCRIPTING ###############################
# Max execution time of a Lua script in milliseconds.
#
# If the maximum execution time is reached Redis will log that a script is
# still in execution after the maximum allowed time and will start to
# reply to queries with an error.
#
# When a long running script exceeds the maximum execution time only the
# SCRIPT KILL and SHUTDOWN NOSAVE commands are available. The first can be
# used to stop a script that did not yet called write commands. The second
# is the only way to shut down the server in the case a write command was
# already issued by the script but the user doesn't want to wait for the natural
# termination of the script.
#
# Set it to 0 or a negative value for unlimited execution without warnings.
# Max execution time for Lua scripts (milliseconds)
lua-time-limit 5000
################################ REDIS CLUSTER ###############################
#
# ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
# WARNING EXPERIMENTAL: Redis Cluster is considered to be stable code, however
# in order to mark it as "mature" we need to wait for a non trivial percentage
# of users to deploy it in production.
# ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
#
# Normal Redis instances can't be part of a Redis Cluster; only nodes that are
# started as cluster nodes can. In order to start a Redis instance as a
# cluster node enable the cluster support uncommenting the following:
## Enable cluster mode
# cluster-enabled yes
# Every cluster node has a cluster configuration file. This file is not
# intended to be edited by hand. It is created and updated by Redis nodes.
# Every Redis Cluster node requires a different cluster configuration file.
# Make sure that instances running in the same system do not have
# overlapping cluster configuration file names.
## A configuration file maintained by Redis itself; make sure each node has its own
# cluster-config-file nodes-6379.conf
# Cluster node timeout is the amount of milliseconds a node must be unreachable
# for it to be considered in failure state.
# Most other internal time limits are multiple of the node timeout.
## Cluster node timeout (milliseconds)
# cluster-node-timeout 15000
# A slave of a failing master will avoid to start a failover if its data
# looks too old.
#
# There is no simple way for a slave to actually have a exact measure of
# its "data age", so the following two checks are performed:
#
# 1) If there are multiple slaves able to failover, they exchange messages
# in order to try to give an advantage to the slave with the best
# replication offset (more data from the master processed).
# Slaves will try to get their rank by offset, and apply to the start
# of the failover a delay proportional to their rank.
#
# 2) Every single slave computes the time of the last interaction with
# its master. This can be the last ping or command received (if the master
# is still in the "connected" state), or the time that elapsed since the
# disconnection with the master (if the replication link is currently down).
# If the last interaction is too old, the slave will not try to failover
# at all.
#
# The point "2" can be tuned by user. Specifically a slave will not perform
# the failover if, since the last interaction with the master, the time
# elapsed is greater than:
#
# (node-timeout * slave-validity-factor) + repl-ping-slave-period
#
# So for example if node-timeout is 30 seconds, and the slave-validity-factor
# is 10, and assuming a default repl-ping-slave-period of 10 seconds, the
# slave will not try to failover if it was not able to talk with the master
# for longer than 310 seconds.
#
# A large slave-validity-factor may allow slaves with too old data to failover
# a master, while a too small value may prevent the cluster from being able to
# elect a slave at all.
#
# For maximum availability, it is possible to set the slave-validity-factor
# to a value of 0, which means, that slaves will always try to failover the
# master regardless of the last time they interacted with the master.
# (However they'll always try to apply a delay proportional to their
# offset rank).
#
# Zero is the only value able to guarantee that when all the partitions heal
# the cluster will always be able to continue.
#
# cluster-slave-validity-factor 10
# Cluster slaves are able to migrate to orphaned masters, that are masters
# that are left without working slaves. This improves the cluster ability
# to resist to failures as otherwise an orphaned master can't be failed over
# in case of failure if it has no working slaves.
#
# Slaves migrate to orphaned masters only if there are still at least a
# given number of other working slaves for their old master. This number
# is the "migration barrier". A migration barrier of 1 means that a slave
# will migrate only if there is at least 1 other working slave for its master
# and so forth. It usually reflects the number of slaves you want for every
# master in your cluster.
#
# Default is 1 (slaves migrate only if their masters remain with at least
# one slave). To disable migration just set it to a very large value.
# A value of 0 can be set but is useful only for debugging and dangerous
# in production.
#
# cluster-migration-barrier 1
# By default Redis Cluster nodes stop accepting queries if they detect there
# is at least an hash slot uncovered (no available node is serving it).
# This way if the cluster is partially down (for example a range of hash slots
# are no longer covered) all the cluster becomes, eventually, unavailable.
# It automatically returns available as soon as all the slots are covered again.
#
# However sometimes you want the subset of the cluster which is working,
# to continue to accept queries for the part of the key space that is still
# covered. In order to do so, just set the cluster-require-full-coverage
# option to no.
#
# cluster-require-full-coverage yes
# In order to setup your cluster make sure to read the documentation
# available at http://redis.io web site.
################################## SLOW LOG ###################################
# The Redis Slow Log is a system to log queries that exceeded a specified
# execution time. The execution time does not include the I/O operations
# like talking with the client, sending the reply and so forth,
# but just the time needed to actually execute the command (this is the only
# stage of command execution where the thread is blocked and can not serve
# other requests in the meantime).
#
# You can configure the slow log with two parameters: one tells Redis
# what is the execution time, in microseconds, to exceed in order for the
# command to get logged, and the other parameter is the length of the
# slow log. When a new command is logged the oldest one is removed from the
# queue of logged commands.
# The following time is expressed in microseconds, so 1000000 is equivalent
# to one second. Note that a negative number disables the slow log, while
# a value of zero forces the logging of every command.
# Log queries slower than this many microseconds
slowlog-log-slower-than 10000
# There is no limit to this length. Just be aware that it will consume memory.
# You can reclaim memory used by the slow log with SLOWLOG RESET.
# Maximum number of entries kept in the slow log
slowlog-max-len 128
################################ LATENCY MONITOR ##############################
# The Redis latency monitoring subsystem samples different operations
# at runtime in order to collect data related to possible sources of
# latency of a Redis instance.
#
# Via the LATENCY command this information is available to the user that can
# print graphs and obtain reports.
#
# The system only logs operations that were performed in a time equal or
# greater than the amount of milliseconds specified via the
# latency-monitor-threshold configuration directive. When its value is set
# to zero, the latency monitor is turned off.
#
# By default latency monitoring is disabled since it is mostly not needed
# if you don't have latency issues, and collecting data has a performance
# impact, that while very small, can be measured under big load. Latency
# monitoring can easily be enabled at runtime using the command
# "CONFIG SET latency-monitor-threshold <milliseconds>" if needed.
latency-monitor-threshold 0
############################# EVENT NOTIFICATION ##############################
# Redis can notify Pub/Sub clients about events happening in the key space.
# This feature is documented at http://redis.io/topics/notifications
#
# For instance if keyspace events notification is enabled, and a client
# performs a DEL operation on key "foo" stored in the Database 0, two
# messages will be published via Pub/Sub:
#
# PUBLISH __keyspace@0__:foo del
# PUBLISH __keyevent@0__:del foo
#
# It is possible to select the events that Redis will notify among a set
# of classes. Every class is identified by a single character:
#
# K Keyspace events, published with __keyspace@<db>__ prefix.
# E Keyevent events, published with __keyevent@<db>__ prefix.
# g Generic commands (non-type specific) like DEL, EXPIRE, RENAME, ...
# $ String commands
# l List commands
# s Set commands
# h Hash commands
# z Sorted set commands
# x Expired events (events generated every time a key expires)
# e Evicted events (events generated when a key is evicted for maxmemory)
# A Alias for g$lshzxe, so that the "AKE" string means all the events.
#
# The "notify-keyspace-events" takes as argument a string that is composed
# of zero or multiple characters. The empty string means that notifications
# are disabled.
#
# Example: to enable list and generic events, from the point of view of the
# event name, use:
#
# notify-keyspace-events Elg
#
# Example 2: to get the stream of the expired keys subscribing to channel
# name __keyevent@0__:expired use:
#
# notify-keyspace-events Ex
#
# By default all notifications are disabled because most users don't need
# this feature and the feature has some overhead. Note that if you don't
# specify at least one of K or E, no events will be delivered.
# The key-event notification classes clients can receive. For example, to get
# key-expiration notifications, set this to "Ex" (E = key events, x = expired events).
notify-keyspace-events ""
############################### ADVANCED CONFIG ###############################
# Hashes are encoded using a memory efficient data structure when they have a
# small number of entries, and the biggest entry does not exceed a given
# threshold. These thresholds can be configured using the following directives.
# Hashes below these thresholds (entry count / value size) use the compact
# ziplist encoding. The list and zset settings below work the same way.
hash-max-ziplist-entries 512
hash-max-ziplist-value 64
# Lists are also encoded in a special way to save a lot of space.
# The number of entries allowed per internal list node can be specified
# as a fixed maximum size or a maximum number of elements.
# For a fixed maximum size, use -5 through -1, meaning:
# -5: max size: 64 Kb <-- not recommended for normal workloads
# -4: max size: 32 Kb <-- not recommended
# -3: max size: 16 Kb <-- probably not recommended
# -2: max size: 8 Kb <-- good
# -1: max size: 4 Kb <-- good
# Positive numbers mean store up to _exactly_ that number of elements
# per list node.
# The highest performing option is usually -2 (8 Kb size) or -1 (4 Kb size),
# but if your use case is unique, adjust the settings as necessary.
list-max-ziplist-size -2
# Lists may also be compressed.
# Compress depth is the number of quicklist ziplist nodes from *each* side of
# the list to *exclude* from compression. The head and tail of the list
# are always uncompressed for fast push/pop operations. Settings are:
# 0: disable all list compression
# 1: depth 1 means "don't start compressing until after 1 node into the list,
# going from either the head or tail"
# So: [head]->node->node->...->node->[tail]
# [head], [tail] will always be uncompressed; inner nodes will compress.
# 2: [head]->[next]->node->node->...->node->[prev]->[tail]
# 2 here means: don't compress head or head->next or tail->prev or tail,
# but compress all nodes between them.
# 3: [head]->[next]->[next]->node->node->...->node->[prev]->[prev]->[tail]
# etc.
list-compress-depth 0
# Sets have a special encoding in just one case: when a set is composed
# of just strings that happen to be integers in radix 10 in the range
# of 64 bit signed integers.
# The following configuration setting sets the limit in the size of the
# set in order to use this special memory saving encoding.
set-max-intset-entries 512
# Similarly to hashes and lists, sorted sets are also specially encoded in
# order to save a lot of space. This encoding is only used when the length and
# elements of a sorted set are below the following limits:
zset-max-ziplist-entries 128
zset-max-ziplist-value 64
# HyperLogLog sparse representation bytes limit. The limit includes the
# 16 bytes header. When an HyperLogLog using the sparse representation crosses
# this limit, it is converted into the dense representation.
#
# A value greater than 16000 is totally useless, since at that point the
# dense representation is more memory efficient.
#
# The suggested value is ~ 3000 in order to have the benefits of
# the space efficient encoding without slowing down too much PFADD,
# which is O(N) with the sparse encoding. The value can be raised to
# ~ 10000 when CPU is not a concern, but space is, and the data set is
# composed of many HyperLogLogs with cardinality in the 0 - 15000 range.
hll-sparse-max-bytes 3000
# Active rehashing uses 1 millisecond every 100 milliseconds of CPU time in
# order to help rehashing the main Redis hash table (the one mapping top-level
# keys to values). The hash table implementation Redis uses (see dict.c)
# performs a lazy rehashing: the more operation you run into a hash table
# that is rehashing, the more rehashing "steps" are performed, so if the
# server is idle the rehashing is never complete and some more memory is used
# by the hash table.
#
# The default is to use this millisecond 10 times every second in order to
# actively rehash the main dictionaries, freeing memory when possible.
#
# If unsure:
# use "activerehashing no" if you have hard latency requirements and it is
# not a good thing in your environment that Redis can reply from time to time
# to queries with 2 milliseconds delay.
#
# use "activerehashing yes" if you don't have such hard requirements but
# want to free memory asap when possible.
activerehashing yes
# The client output buffer limits can be used to force disconnection of clients
# that are not reading data from the server fast enough for some reason (a
# common reason is that a Pub/Sub client can't consume messages as fast as the
# publisher can produce them).
#
# The limit can be set differently for the three different classes of clients:
#
# normal -> normal clients including MONITOR clients
# slave -> slave clients
# pubsub -> clients subscribed to at least one pubsub channel or pattern
#
# The syntax of every client-output-buffer-limit directive is the following:
#
# client-output-buffer-limit <class> <hard limit> <soft limit> <soft seconds>
#
# A client is immediately disconnected once the hard limit is reached, or if
# the soft limit is reached and remains reached for the specified number of
# seconds (continuously).
# So for instance if the hard limit is 32 megabytes and the soft limit is
# 16 megabytes / 10 seconds, the client will get disconnected immediately
# if the size of the output buffers reach 32 megabytes, but will also get
# disconnected if the client reaches 16 megabytes and continuously overcomes
# the limit for 10 seconds.
#
# By default normal clients are not limited because they don't receive data
# without asking (in a push way), but just after a request, so only
# asynchronous clients may create a scenario where data is requested faster
# than it can read.
#
# Instead there is a default limit for pubsub and slave clients, since
# subscribers and slaves receive data in a push fashion.
#
# Both the hard or the soft limit can be disabled by setting them to zero.
client-output-buffer-limit normal 0 0 0
client-output-buffer-limit slave 256mb 64mb 60
client-output-buffer-limit pubsub 32mb 8mb 60
# Redis calls an internal function to perform many background tasks, like
# closing connections of clients in timeout, purging expired keys that are
# never requested, and so forth.
#
# Not all tasks are performed with the same frequency, but Redis checks for
# tasks to perform according to the specified "hz" value.
#
# By default "hz" is set to 10. Raising the value will use more CPU when
# Redis is idle, but at the same time will make Redis more responsive when
# there are many keys expiring at the same time, and timeouts may be
# handled with more precision.
#
# The range is between 1 and 500, however a value over 100 is usually not
# a good idea. Most users should use the default of 10 and raise this up to
# 100 only in environments where very low latency is required.
hz 10
# When a child rewrites the AOF file, if the following option is enabled
# the file will be fsync-ed every 32 MB of data generated. This is useful
# in order to commit the file to the disk more incrementally and avoid
# big latency spikes.
aof-rewrite-incremental-fsync yes
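As a quick demonstration of the event-notification settings above — a sketch assuming a local instance and database 0; the key name foo is arbitrary — expired-key events can be enabled and observed from redis-cli:

# Enable keyspace notifications for expired events at runtime
redis-cli config set notify-keyspace-events Ex
# In one terminal, subscribe to the expired-key channel of database 0
redis-cli psubscribe '__keyevent@0__:expired'
# In another terminal, create a key with a 5 second TTL; when it expires,
# the subscriber receives a message carrying the key name ("foo")
redis-cli set foo bar ex 5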

Editing the configuration

# Go into the directory configured earlier
cd /usr/local/redis1
# redis.conf does not exist there yet; copy it from the unpacked redis-stable directory
cp redis.conf /usr/local/redis1/bin
# Then edit redis.conf:
#   set daemonize to yes so Redis runs in the background
#   change port to 7001, which makes building the cluster easier
#   comment out the bind directive

Start the server

./redis-server redis.conf
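To verify the daemonized instance — a sketch assuming the port was changed to 7001 as above:

# The server now runs in the background; ping it on its configured port
./redis-cli -p 7001 ping
# Expected reply: PONG
# Inspect and clear the slow log configured earlier via slowlog-log-slower-than
./redis-cli -p 7001 slowlog get 10
./redis-cli -p 7001 slowlog reset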

References

Implementing distributed sessions with springboot + springsession, with source-code analysis

This follows the previous post on using Redis with springboot.

What is springsession

It implements distributed session management.

Why use springsession

It is part of the Spring family; if you don't want to implement distributed session management yourself, springsession does it for you.

Add the dependency

<dependency>
    <groupId>org.springframework.session</groupId>
    <artifactId>spring-session</artifactId>
</dependency>

Add the configuration

@EnableRedisHttpSession
public class HttpSessionConfiguration {
}
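To have something to test against, a minimal controller such as the following can be added (this class is illustrative, not from the original project): it writes a session attribute on first access and reads it back afterwards, so two app instances sharing one Redis should return the same value for the same SESSION cookie.

import javax.servlet.http.HttpSession;

import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

// Hypothetical test controller: stores a value in the session on the first
// call and reads it back afterwards. With spring-session backed by Redis,
// every instance sharing the same Redis sees the same session data.
@RestController
public class SessionTestController {

    @RequestMapping("/session")
    public String session(HttpSession session) {
        Object name = session.getAttribute("name");
        if (name == null) {
            session.setAttribute("name", "kite");
            return "stored name=kite, sessionId=" + session.getId();
        }
        return "read name=" + name + ", sessionId=" + session.getId();
    }
}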

Testing

image
image

How it works

SpringHttpSession

  • Step 1: look at @EnableRedisHttpSession

    @Retention(java.lang.annotation.RetentionPolicy.RUNTIME)
    @Target({ java.lang.annotation.ElementType.TYPE })
    @Documented
    @Import(RedisHttpSessionConfiguration.class)
    @Configuration
    public @interface EnableRedisHttpSession {
    ...
    }
  • Step 2: look at RedisHttpSessionConfiguration

    @Configuration
    @EnableScheduling
    public class RedisHttpSessionConfiguration extends SpringHttpSessionConfiguration
            implements EmbeddedValueResolverAware, ImportAware {
    ...
    }
  • Step 3: it extends SpringHttpSessionConfiguration, so look at that class

public class SpringHttpSessionConfiguration implements ApplicationContextAware {

    private CookieHttpSessionStrategy defaultHttpSessionStrategy = new CookieHttpSessionStrategy();

    private boolean usesSpringSessionRememberMeServices;

    private ServletContext servletContext;

    private CookieSerializer cookieSerializer;

    private HttpSessionStrategy httpSessionStrategy = this.defaultHttpSessionStrategy;

    private List<HttpSessionListener> httpSessionListeners = new ArrayList<HttpSessionListener>();

    @PostConstruct
    public void init() {
        if (this.cookieSerializer != null) {
            this.defaultHttpSessionStrategy.setCookieSerializer(this.cookieSerializer);
        }
        else if (this.usesSpringSessionRememberMeServices) {
            DefaultCookieSerializer cookieSerializer = new DefaultCookieSerializer();
            cookieSerializer.setRememberMeRequestAttribute(
                    SpringSessionRememberMeServices.REMEMBER_ME_LOGIN_ATTR);
            this.defaultHttpSessionStrategy.setCookieSerializer(cookieSerializer);
        }
    }

    @Bean
    public SessionEventHttpSessionListenerAdapter sessionEventHttpSessionListenerAdapter() {
        return new SessionEventHttpSessionListenerAdapter(this.httpSessionListeners);
    }

    @Bean
    public <S extends ExpiringSession> SessionRepositoryFilter<? extends ExpiringSession> springSessionRepositoryFilter(
            SessionRepository<S> sessionRepository) {
        SessionRepositoryFilter<S> sessionRepositoryFilter = new SessionRepositoryFilter<S>(
                sessionRepository);
        sessionRepositoryFilter.setServletContext(this.servletContext);
        if (this.httpSessionStrategy instanceof MultiHttpSessionStrategy) {
            sessionRepositoryFilter.setHttpSessionStrategy(
                    (MultiHttpSessionStrategy) this.httpSessionStrategy);
        }
        else {
            sessionRepositoryFilter.setHttpSessionStrategy(this.httpSessionStrategy);
        }
        return sessionRepositoryFilter;
    }

    public void setApplicationContext(ApplicationContext applicationContext)
            throws BeansException {
        if (ClassUtils.isPresent(
                "org.springframework.security.web.authentication.RememberMeServices",
                null)) {
            this.usesSpringSessionRememberMeServices = !ObjectUtils
                    .isEmpty(applicationContext
                            .getBeanNamesForType(SpringSessionRememberMeServices.class));
        }
    }

    @Autowired(required = false)
    public void setServletContext(ServletContext servletContext) {
        this.servletContext = servletContext;
    }

    @Autowired(required = false)
    public void setCookieSerializer(CookieSerializer cookieSerializer) {
        this.cookieSerializer = cookieSerializer;
    }

    @Autowired(required = false)
    public void setHttpSessionStrategy(HttpSessionStrategy httpSessionStrategy) {
        this.httpSessionStrategy = httpSessionStrategy;
    }

    @Autowired(required = false)
    public void setHttpSessionListeners(List<HttpSessionListener> listeners) {
        this.httpSessionListeners = listeners;
    }
}
Note that the default session strategy is cookie-based:
defaultHttpSessionStrategy = new CookieHttpSessionStrategy().
Looking again at the springSessionRepositoryFilter bean shown above: its
SessionRepository parameter is the RedisOperationsSessionRepository bean
created in RedisHttpSessionConfiguration, so RedisOperationsSessionRepository
is the implementation actually used for session storage.
  • Next, look at SessionRepositoryFilter
    public class SessionRepositoryFilter<S extends ExpiringSession>
            extends OncePerRequestFilter {
        ...
    }

    It extends OncePerRequestFilter:

    abstract class OncePerRequestFilter implements Filter {
        public final void doFilter(ServletRequest request, ServletResponse response,
                FilterChain filterChain) throws ServletException, IOException {
            ...
    }

    doFilter() delegates to doFilterInternal(), which SessionRepositoryFilter implements:

    @Override
    protected void doFilterInternal(HttpServletRequest request,
            HttpServletResponse response, FilterChain filterChain)
            throws ServletException, IOException {
        request.setAttribute(SESSION_REPOSITORY_ATTR, this.sessionRepository);
        SessionRepositoryRequestWrapper wrappedRequest = new SessionRepositoryRequestWrapper(
                request, response, this.servletContext);
        SessionRepositoryResponseWrapper wrappedResponse = new SessionRepositoryResponseWrapper(
                wrappedRequest, response);
        HttpServletRequest strategyRequest = this.httpSessionStrategy
                .wrapRequest(wrappedRequest, wrappedResponse);
        HttpServletResponse strategyResponse = this.httpSessionStrategy
                .wrapResponse(wrappedRequest, wrappedResponse);
        try {
            filterChain.doFilter(strategyRequest, strategyResponse);
        }
        finally {
            wrappedRequest.commitSession();
        }
    }

    It wraps the request and response objects, lets the configured strategy wrap
    them again, runs the rest of the filter chain, and finally calls
    wrappedRequest.commitSession():

    HttpSessionWrapper wrappedSession = getCurrentSession();
    if (wrappedSession == null) {
        if (isInvalidateClientSession()) {
            SessionRepositoryFilter.this.httpSessionStrategy
                    .onInvalidateSession(this, this.response);
        }
    }
    else {
        S session = wrappedSession.getSession();
        SessionRepositoryFilter.this.sessionRepository.save(session);
        if (!isRequestedSessionIdValid()
                || !session.getId().equals(getRequestedSessionId())) {
            SessionRepositoryFilter.this.httpSessionStrategy.onNewSession(session,
                    this, this.response);
        }
    }

    commitSession() saves the session to the SessionRepository (Redis here) and
    tells the strategy when a new session was created or the client session was
    invalidated. That is the end of the processing path, so no further detail
    is needed here.
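Putting it together, a quick external check — a sketch that assumes the illustrative /session endpoint from the testing section above — shows both halves of the mechanism:

# The first response carries the cookie written by CookieHttpSessionStrategy
curl -i http://localhost:8080/session
# Look for a "Set-Cookie: SESSION=..." header in the output
# The session data itself is stored in Redis under the spring:session prefix
redis-cli keys 'spring:session:*'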

A few diagrams

image

Implementations of SessionRepository
image

image

References

springsession

Docker study notes

What is Docker

Docker is implemented in Go, the language released by Google. It builds on Linux kernel technologies such as cgroups, namespaces, and union filesystems like AUFS to wrap and isolate processes, making it an operating-system-level virtualization technology. Because each isolated process is independent of the host and of the other isolated processes, it is also called a container.

On top of containers, Docker adds further abstractions — covering the filesystem, networking, and process isolation — which greatly simplifies creating and maintaining containers and makes Docker lighter and faster than virtual-machine technology.

  • Go
  • cgroup
  • namespace
  • AUFS
  • Union FS

Why use Docker

Features

  • More efficient use of system resources
  • Faster startup
  • A consistent runtime environment
  • Continuous delivery and deployment
  • Easier migration
  • Easier maintenance and scaling

Compared with traditional virtual machines

Feature             Container                  Virtual machine
Startup time        Seconds                    Minutes
Disk usage          Typically MB               Typically GB
Performance         Near native                Slower than native
Capacity per host   Thousands of containers    Usually a few dozen

Traditional virtual machine:
image
Docker:
image

Basic Docker concepts

  • Image
  • Container
  • Repository (registry)
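A minimal session tying the three concepts together (the public nginx image is used here purely as an example):

# Pull an image from a registry
docker pull nginx
# Start a container from that image
docker run -d --name web -p 8080:80 nginx
# Images and containers are listed separately
docker images
docker ps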

References

Docker — 从入门到实践

Packaging a springboot app as a Docker image and deploying it

Environment

Machine: one Vultr instance running CentOS 7

Downloads

  1. jdk8
  2. maven
  3. git: yum install git
  4. docker: yum install docker-io

Environment setup

JDK and Maven
image

# 1. Unpack the archives
tar -zxvf jdk8.tar.gz
tar -zxvf apache-maven-3.3.9-bin.tar.gz
# 2. Configure environment variables
vim /etc/profile
export JAVA_HOME=/root/jdk8
export MAVEN_HOME=/root/apache-maven-3.3.9
export PATH=$JAVA_HOME/bin:$MAVEN_HOME/bin:$PATH
# 3. Apply the changes
source /etc/profile
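A quick sanity check that both tools are now on the PATH:

java -version    # should print the JDK 8 version banner
mvn -version     # should print Maven 3.3.9 and the JAVA_HOME it picked up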

Project setup (reusing an existing project)

pan-search-springboot
Add the Docker build plugin configuration to pom.xml:

<plugin>
    <groupId>com.spotify</groupId>
    <artifactId>docker-maven-plugin</artifactId>
    <version>0.4.3</version>
    <configuration>
        <imageName>${docker.image.prefix}/${project.artifactId}</imageName>
        <dockerDirectory>src/main/docker</dockerDirectory>
        <resources>
            <resource>
                <targetPath>/</targetPath>
                <directory>${project.build.directory}</directory>
                <include>${project.build.finalName}.jar</include>
            </resource>
        </resources>
    </configuration>
</plugin>

Configuration notes

  1. imageName — the name of the image to build
  2. dockerDirectory — the location of the Dockerfile
  3. resources — the files that need to sit alongside the Dockerfile when the
     image is built; the application jar normally belongs here. In this example
     only one jar file is needed.

The Dockerfile itself lives at src/main/docker/Dockerfile:
FROM frolvlad/alpine-oraclejdk8:slim
VOLUME /tmp
ADD docker-spring-boot-1.0.0.jar app.jar
ENTRYPOINT ["java","-Djava.security.egd=file:/dev/./urandom","-jar","/app.jar"]

Dockerfile notes

  1. VOLUME declares /tmp as a volume: Docker creates a directory under
     /var/lib/docker on the host and mounts it at /tmp in the container. This
     step is optional, but useful for anything that touches the filesystem;
     Spring Boot's embedded Tomcat uses /tmp as its working directory by
     default, so this persists it into Docker's data directory.
  2. The project jar file is added into the container as "app.jar".
  3. ENTRYPOINT runs app.jar. To shorten Tomcat startup time, a system property
     points at "/dev/urandom" as the entropy source.

Build the Docker image

# Run from the project root
mvn package docker:build

Run the container

docker run -p 8080:8080 -t kite/pan-search-springboot
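The -t form above keeps the container in the foreground; for a server it is more common to run detached and follow the logs (the container name pan-search below is arbitrary):

# Run detached, mapping container port 8080 to the host
docker run -d -p 8080:8080 --name pan-search kite/pan-search-springboot
# Follow the application log
docker logs -f pan-search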

Package and push

Register an account

register

Log in

docker login

Push

docker push kitesweet/pan-search-springboot

Pull the image

docker pull kitesweet/pan-search-springboot

Common Docker commands

# List running containers
docker ps
# List all containers, including stopped ones
docker ps -a
# Remove all containers
docker rm $(docker ps -a -q)
# Remove a single container
docker rm <container name or ID>
# Stop, start, or kill a container
docker stop <container name or ID>
docker start <container name or ID>
docker kill <container name or ID>
# List all images
docker images
# Follow a container's log
docker logs -f <container name or ID>

References

  1. 常用docker命令,及一些坑
  2. 用 Docker 构建、运行、发布一个 Spring Boot 应用

JVM startup parameter reference

Note that JVM options must come before -jar: anything placed after the jar path is handed to the application as program arguments, not to the JVM.

java -Xms1024m -Xmx1024m -XX:NewSize=512m -XX:MaxNewSize=512m \
     -XX:PermSize=256m -XX:MaxPermSize=256m \
     -XX:+UseConcMarkSweepGC -XX:CMSFullGCsBeforeCompaction=5 \
     -XX:+UseCMSCompactAtFullCollection -XX:+CMSParallelRemarkEnabled \
     -XX:+CMSPermGenSweepingEnabled -XX:+CMSClassUnloadingEnabled \
     -XX:+UseCMSInitiatingOccupancyOnly -XX:CMSInitiatingOccupancyFraction=70 \
     -XX:+DisableExplicitGC -XX:+UseCompressedOops -XX:+DoEscapeAnalysis \
     -XX:MaxTenuringThreshold=10 \
     -verbose:gc -Xloggc:/alidata1/admin/logs/gc.log -XX:+PrintGCDetails \
     -jar xxx.jar

Java command-line tools

Demo setup

# All examples below target a running Java process with pid=14750

jinfo:

# Show the Java process's thread stack size
jinfo -flag ThreadStackSize 14750
# Check whether compressed oops are in use
jinfo -flag UseCompressedOops 14750
# Dump the system properties
jinfo -sysprops 14750

jstack:

# Show the state of every thread in the given Java process
jstack 14750

jstat:

# Show GC statistics
jstat -gcutil 14750

jmap & MAT

# Show heap usage for each generation
jmap -heap 14750
# Dump the live heap for offline analysis (e.g. in MAT)
jmap -dump:live,format=b,file=/fileName 14750

Reference: http://www.javaranger.com/archives/1063

JVM memory model

Memory areas

The method area, the Java heap, the Java (VM) stacks, and the native method stacks.

Heap objects are divided among the young generation (Young), the old generation (Tenured), and the permanent generation (Perm).

Young generation:

The young generation is split into three spaces: one Eden space and two Survivor spaces. Most objects are created in Eden. When Eden fills up, live objects are copied into one of the Survivor spaces; when that Survivor space fills up, its live objects are copied into the other Survivor space; and when that second space fills up as well, the objects that were copied over from the first Survivor space and are still alive are promoted to the old generation (Tenured). Note that the two Survivor spaces are symmetric and have no fixed ordering, so one space can hold both objects copied from Eden and objects copied from the other Survivor space, while only objects arriving from a Survivor space are promoted to the old generation. One Survivor space is always empty.
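To make the arithmetic concrete — a sketch with illustrative values, not a recommendation — HotSpot splits the young generation according to -XX:SurvivorRatio:

# With a 512 MB young generation and -XX:SurvivorRatio=8, Eden:S0:S1 = 8:1:1
#   Eden    = 512 MB * 8/10 = 409.6 MB
#   S0 = S1 = 512 MB * 1/10 =  51.2 MB  (and one survivor space is always empty)
java -Xms1024m -Xmx1024m -XX:NewSize=512m -XX:MaxNewSize=512m \
     -XX:SurvivorRatio=8 -jar app.jar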

Tenured (old generation)

The old generation holds objects that survived the young generation; these are generally long-lived objects.

Perm (permanent generation)

The permanent generation is the method area; it is not part of the heap. It stores static content such as class and method metadata. It usually has little effect on garbage collection, but applications that generate or load classes at runtime (Hibernate, for example) may need a larger permanent generation to hold the classes added while running; its size is set with -XX:MaxPermSize=.

Reference: http://www.javaranger.com/archives/472