Hystrix Timeout Configuration

Configure the global Hystrix timeout:

hystrix.command.default.execution.isolation.thread.timeoutInMilliseconds=1000

Configuration reference: GitHub

Configure the timeout for a single command:

hystrix.command.HystrixCommandKey.execution.isolation.thread.timeoutInMilliseconds=1000

Here HystrixCommandKey is the commandKey set via @HystrixCommand.
Example:
@HystrixCommand(groupKey = "StoreSubmission", commandKey = "StoreSubmission", threadPoolKey = "StoreSubmission")
public String storeSubmission(ReturnType returnType, InputStream is, String id) {
}
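With the configuration above the commandKey is StoreSubmission, so (picking an assumed value) hystrix.command.StoreSubmission.execution.isolation.thread.timeoutInMilliseconds=3000 would give this one command a 3-second timeout while all other commands keep the default.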
@HystrixCommand requires this dependency:
<dependency>
<groupId>com.netflix.hystrix</groupId>
<artifactId>hystrix-javanica</artifactId>
<version>${hystrix-version}</version>
</dependency>

Configuration reference: Stack Overflow

Configuring the timeout in Feign

In Feign you cannot use @HystrixCommand to configure an individual method.

A per-method key such as hystrix.command.MyService#getLastTimeData(Map).execution.isolation.thread.timeoutInMilliseconds does work; this has been verified in practice (verified by @甲申).

How the configuration works

HystrixInvocationHandler

final class HystrixInvocationHandler implements InvocationHandler {
private final Target<?> target;
private final Map<Method, MethodHandler> dispatch;
private final FallbackFactory<?> fallbackFactory; // Nullable
private final Map<Method, Method> fallbackMethodMap;
private final Map<Method, Setter> setterMethodMap;// per-method timeout/command-key (Setter) configuration
static Map<Method, Setter> toSetters(SetterFactory setterFactory, Target<?> target,
Set<Method> methods) {
Map<Method, Setter> result = new LinkedHashMap<Method, Setter>();
for (Method method : methods) {
method.setAccessible(true);
result.put(method, setterFactory.create(target, method));
}
return result;
}

SetterFactory

public interface SetterFactory {
/**
* Returns a hystrix setter appropriate for the given target and method
*/
HystrixCommand.Setter create(Target<?> target, Method method);
/**
* Default behavior is to derive the group key from {@link Target#name()} and the command key from
* {@link Feign#configKey(Class, Method)}.
*/
final class Default implements SetterFactory {
@Override
public HystrixCommand.Setter create(Target<?> target, Method method) {
String groupKey = target.name();
String commandKey = Feign.configKey(target.type(), method);
return HystrixCommand.Setter
.withGroupKey(HystrixCommandGroupKey.Factory.asKey(groupKey))
.andCommandKey(HystrixCommandKey.Factory.asKey(commandKey));
}
}
}

Feign.configKey

public static String configKey(Class targetType, Method method) {
StringBuilder builder = new StringBuilder();
builder.append(targetType.getSimpleName());
builder.append('#').append(method.getName()).append('(');
for (Type param : method.getGenericParameterTypes()) {
param = Types.resolve(targetType, targetType, param);
builder.append(Types.getRawType(param).getSimpleName()).append(',');
}
if (method.getParameterTypes().length > 0) {
builder.deleteCharAt(builder.length() - 1);
}
return builder.append(')').toString();
}

Matching against the rule above explains the property key:

hystrix.command.MyService#getLastTimeData(Map).execution.isolation.thread.timeoutInMilliseconds
where MyService#getLastTimeData(Map) is the commandKey generated by configKey.
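As a minimal sketch of how such a method maps to that key (the interface, endpoint and types below are assumptions for illustration, not taken from a real project):

import java.util.Map;
import feign.QueryMap;
import feign.RequestLine;

// Hypothetical Feign client; MyService / getLastTimeData are assumed names.
public interface MyService {
    @RequestLine("GET /lastTimeData")
    Map<String, Object> getLastTimeData(@QueryMap Map<String, String> query);
}

Feign.configKey(MyService.class, method) produces "MyService#getLastTimeData(Map)" (only the raw parameter type's simple name is kept), which is exactly the commandKey the property above targets.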

How the Hystrix timeout takes effect

HystrixInvocationHandler.invoke->
newHystrixCommand->execute->toObservable->
addTimerListener(executionTimeoutInMilliseconds)->
ScheduledThreadPoolExecutor.scheduleAtFixedRate-call-TimerListener.tick->
timeoutRunnable.run->throw HystrixTimeoutException
In short, a ScheduledThreadPoolExecutor is created; when the timeout elapses the scheduled task throws a HystrixTimeoutException, and if the command completes before the timeout, the timer Reference is cleared so the tick never fires.
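To make the mechanism concrete, here is a stripped-down sketch of the same idea (this is not the Hystrix source; names and structure are simplified):

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicBoolean;

// A scheduled "tick" fires after the configured timeout; whichever side wins the CAS
// either raises the timeout error or cancels the timer reference on normal completion.
public class TimeoutSketch {
    public static void main(String[] args) throws Exception {
        ScheduledExecutorService timer = Executors.newScheduledThreadPool(1);
        int timeoutMs = 1000; // executionTimeoutInMilliseconds
        AtomicBoolean done = new AtomicBoolean(false);

        ScheduledFuture<?> tick = timer.schedule(() -> {
            if (done.compareAndSet(false, true)) {
                System.out.println("timed out -> would throw HystrixTimeoutException");
            }
        }, timeoutMs, TimeUnit.MILLISECONDS);

        Thread.sleep(200); // the command body (simulated work)
        if (done.compareAndSet(false, true)) {
            tick.cancel(false); // finished first: clear the timer reference
            System.out.println("completed before the timeout");
        }
        timer.shutdown();
    }
}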

PS

This came up as a question in a chat group, so I took the chance to skim the source. If anything here is wrong, please point it out so it does not mislead others.

References

MyBatis Source Code Analysis: Startup Configuration & Spring-Based Startup (Part 1)

What is MyBatis

MyBatis is a first-class persistence framework with support for custom SQL, stored procedures, and advanced mappings. MyBatis eliminates almost all of the JDBC code, manual parameter setting, and result-set retrieval. It can use simple XML or annotations to configure and map Maps and Java POJOs (Plain Old Java Objects) to database records.

What MyBatis is used for

It provides convenient, highly flexible, customizable SQL against the database and is simple to use.

Version

Based on mybatis-3.4.4.

MyBatis configuration

mybatis-config.xml, copied from org.apache.ibatis.autoconstructor under the test sources:

<?xml version="1.0" encoding="UTF-8" ?>
<!DOCTYPE configuration
PUBLIC "-//mybatis.org//DTD Config 3.0//EN"
"http://mybatis.org/dtd/mybatis-3-config.dtd">
<configuration>
<!-- autoMappingBehavior should be set in each test case -->
<environments default="development">
<environment id="development">
<transactionManager type="JDBC">
<property name="" value=""/>
</transactionManager>
<dataSource type="UNPOOLED">
<property name="driver" value="org.hsqldb.jdbcDriver"/>
<property name="url" value="jdbc:hsqldb:mem:automapping"/>
<property name="username" value="sa"/>
</dataSource>
</environment>
</environments>
<mappers>
<mapper resource="org/apache/ibatis/autoconstructor/AutoConstructorMapper.xml"/>
</mappers>
</configuration>

Building the SqlSessionFactory

private static SqlSessionFactory sqlSessionFactory;
@BeforeClass
public static void setUp() throws Exception {
// create a SqlSessionFactory
final Reader reader = Resources.getResourceAsReader("org/apache/ibatis/autoconstructor/mybatis-config.xml");
sqlSessionFactory = new SqlSessionFactoryBuilder().build(reader);// build the factory
reader.close();
// populate in-memory database
final SqlSession session = sqlSessionFactory.openSession();// obtain a session
final Connection conn = session.getConnection();// open a connection
final Reader dbReader = Resources.getResourceAsReader("org/apache/ibatis/autoconstructor/CreateDB.sql");// (restored from the original test) the script dbReader refers to
new ScriptRunner(conn).runScript(dbReader);// run the schema/data script
dbReader.close();
session.close();
}
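Once the factory exists, usage is just opening a session and working with a mapper. A minimal sketch (the mapper method below is an assumption for illustration, not part of the excerpt above):

import org.apache.ibatis.autoconstructor.AutoConstructorMapper;
import org.apache.ibatis.session.SqlSession;
import org.apache.ibatis.session.SqlSessionFactory;

// Hypothetical usage; getSubject(...) is an assumed mapper method.
public class UsageSketch {
    static Object loadSubject(SqlSessionFactory factory) {
        try (SqlSession session = factory.openSession()) {
            AutoConstructorMapper mapper = session.getMapper(AutoConstructorMapper.class);
            return mapper.getSubject(1); // runs the statement mapped in AutoConstructorMapper.xml
        }
    }
}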

Source code analysis

SqlSessionFactoryBuilder

The builder pattern is used to construct the factory:

SqlSessionFactory sqlSessionFactory = new SqlSessionFactoryBuilder().build(reader);

SqlSessionFactoryBuilder

public class SqlSessionFactoryBuilder {
public SqlSessionFactory build(Reader reader) {
return build(reader, null, null);
}
public SqlSessionFactory build(Reader reader, String environment, Properties properties) {
try {
XMLConfigBuilder parser = new XMLConfigBuilder(reader, environment, properties);
return build(parser.parse());
} catch (Exception e) {
throw ExceptionFactory.wrapException("Error building SqlSession.", e);
} finally {
ErrorContext.instance().reset();
try {
reader.close();
} catch (IOException e) {
// Intentionally ignore. Prefer previous error.
}
}
}
...
public SqlSessionFactory build(Configuration config) {
return new DefaultSqlSessionFactory(config);
}
}

XMLConfigBuilder

Parsing is based on XPath; the handled top-level elements are:

  • propertiesElement
  • typeAliasesElement
  • pluginElement
  • objectFactoryElement
  • objectWrapperFactoryElement
  • reflectorFactoryElement
  • settingsElement
  • environmentsElement
  • databaseIdProviderElement
  • typeHandlerElement
  • mapperElement

For the details of each configuration element, see the official documentation.

When parsing the mappers element, a resource node causes an XMLMapperBuilder to be created, which continues the parsing:

public class XMLConfigBuilder extends BaseBuilder {
private boolean parsed;
private XPathParser parser;
private String environment;
private ReflectorFactory localReflectorFactory = new DefaultReflectorFactory();
public Configuration parse() {
if (parsed) {
throw new BuilderException("Each XMLConfigBuilder can only be used once.");
}
parsed = true;
parseConfiguration(parser.evalNode("/configuration"));// parse starting from the root node via XPath
return configuration;
}
private void parseConfiguration(XNode root) {
try {
//issue #117 read properties first
propertiesElement(root.evalNode("properties"));
Properties settings = settingsAsProperties(root.evalNode("settings"));
loadCustomVfs(settings);
typeAliasesElement(root.evalNode("typeAliases"));
pluginElement(root.evalNode("plugins"));
objectFactoryElement(root.evalNode("objectFactory"));
objectWrapperFactoryElement(root.evalNode("objectWrapperFactory"));
reflectorFactoryElement(root.evalNode("reflectorFactory"));
settingsElement(settings);
// read it after objectFactory and objectWrapperFactory issue #631
environmentsElement(root.evalNode("environments"));
databaseIdProviderElement(root.evalNode("databaseIdProvider"));
typeHandlerElement(root.evalNode("typeHandlers"));
mapperElement(root.evalNode("mappers"));
} catch (Exception e) {
throw new BuilderException("Error parsing SQL Mapper Configuration. Cause: " + e, e);
}
}
...
}

Integration with Spring

SqlSessionFactoryBean

<!-- Spring + MyBatis: no separate mybatis configuration/mapping file is needed -->
<bean id="sqlSessionFactory" class="org.mybatis.spring.SqlSessionFactoryBean">
<property name="dataSource" ref="dataSource" />
<!-- automatically scan for mapper xml files -->
<property name="mapperLocations" value="classpath:dal/**/*.xml"></property>
</bean>

SqlSessionFactoryBean implements FactoryBean; the SqlSessionFactory is built through getObject -> afterPropertiesSet -> buildSqlSessionFactory:

public class SqlSessionFactoryBean implements FactoryBean<SqlSessionFactory>, InitializingBean, ApplicationListener<ApplicationEvent> {
private static final Log logger = LogFactory.getLog(SqlSessionFactoryBean.class);
private Resource configLocation;
private Resource[] mapperLocations;
private DataSource dataSource;
private TransactionFactory transactionFactory;
private Properties configurationProperties;
private SqlSessionFactoryBuilder sqlSessionFactoryBuilder = new SqlSessionFactoryBuilder();
private SqlSessionFactory sqlSessionFactory;
private String environment = SqlSessionFactoryBean.class.getSimpleName(); // EnvironmentAware requires spring 3.1
private boolean failFast;
private Interceptor[] plugins;
private TypeHandler<?>[] typeHandlers;
private String typeHandlersPackage;
private Class<?>[] typeAliases;
private String typeAliasesPackage;
private Class<?> typeAliasesSuperType;
private DatabaseIdProvider databaseIdProvider; // issue #19. No default provider.
private ObjectFactory objectFactory;
private ObjectWrapperFactory objectWrapperFactory;
/**
* {@inheritDoc}
*/
public void afterPropertiesSet() throws Exception {
notNull(dataSource, "Property 'dataSource' is required");
notNull(sqlSessionFactoryBuilder, "Property 'sqlSessionFactoryBuilder' is required");
this.sqlSessionFactory = buildSqlSessionFactory();
}
/**
* Build a {@code SqlSessionFactory} instance.
*
* The default implementation uses the standard MyBatis {@code XMLConfigBuilder} API to build a
* {@code SqlSessionFactory} instance based on an Reader.
*
* @return SqlSessionFactory
* @throws IOException if loading the config file failed
*/
protected SqlSessionFactory buildSqlSessionFactory() throws IOException {
Configuration configuration;
XMLConfigBuilder xmlConfigBuilder = null;
if (this.configLocation != null) {
xmlConfigBuilder = new XMLConfigBuilder(this.configLocation.getInputStream(), null, this.configurationProperties);
configuration = xmlConfigBuilder.getConfiguration();
} else {
if (logger.isDebugEnabled()) {
logger.debug("Property 'configLocation' not specified, using default MyBatis Configuration");
}
configuration = new Configuration();
configuration.setVariables(this.configurationProperties);
}
if (this.objectFactory != null) {
configuration.setObjectFactory(this.objectFactory);
}
if (this.objectWrapperFactory != null) {
configuration.setObjectWrapperFactory(this.objectWrapperFactory);
}
if (hasLength(this.typeAliasesPackage)) {
String[] typeAliasPackageArray = tokenizeToStringArray(this.typeAliasesPackage,
ConfigurableApplicationContext.CONFIG_LOCATION_DELIMITERS);
for (String packageToScan : typeAliasPackageArray) {
configuration.getTypeAliasRegistry().registerAliases(packageToScan,
typeAliasesSuperType == null ? Object.class : typeAliasesSuperType);
if (logger.isDebugEnabled()) {
logger.debug("Scanned package: '" + packageToScan + "' for aliases");
}
}
}
if (!isEmpty(this.typeAliases)) {
for (Class<?> typeAlias : this.typeAliases) {
configuration.getTypeAliasRegistry().registerAlias(typeAlias);
if (logger.isDebugEnabled()) {
logger.debug("Registered type alias: '" + typeAlias + "'");
}
}
}
if (!isEmpty(this.plugins)) {
for (Interceptor plugin : this.plugins) {
configuration.addInterceptor(plugin);
if (logger.isDebugEnabled()) {
logger.debug("Registered plugin: '" + plugin + "'");
}
}
}
if (hasLength(this.typeHandlersPackage)) {
String[] typeHandlersPackageArray = tokenizeToStringArray(this.typeHandlersPackage,
ConfigurableApplicationContext.CONFIG_LOCATION_DELIMITERS);
for (String packageToScan : typeHandlersPackageArray) {
configuration.getTypeHandlerRegistry().register(packageToScan);
if (logger.isDebugEnabled()) {
logger.debug("Scanned package: '" + packageToScan + "' for type handlers");
}
}
}
if (!isEmpty(this.typeHandlers)) {
for (TypeHandler<?> typeHandler : this.typeHandlers) {
configuration.getTypeHandlerRegistry().register(typeHandler);
if (logger.isDebugEnabled()) {
logger.debug("Registered type handler: '" + typeHandler + "'");
}
}
}
if (xmlConfigBuilder != null) {
try {
xmlConfigBuilder.parse();
if (logger.isDebugEnabled()) {
logger.debug("Parsed configuration file: '" + this.configLocation + "'");
}
} catch (Exception ex) {
throw new NestedIOException("Failed to parse config resource: " + this.configLocation, ex);
} finally {
ErrorContext.instance().reset();
}
}
if (this.transactionFactory == null) {
this.transactionFactory = new SpringManagedTransactionFactory();
}
Environment environment = new Environment(this.environment, this.transactionFactory, this.dataSource);
configuration.setEnvironment(environment);
if (this.databaseIdProvider != null) {
try {
configuration.setDatabaseId(this.databaseIdProvider.getDatabaseId(this.dataSource));
} catch (SQLException e) {
throw new NestedIOException("Failed getting a databaseId", e);
}
}
if (!isEmpty(this.mapperLocations)) {
for (Resource mapperLocation : this.mapperLocations) {
if (mapperLocation == null) {
continue;
}
try {
XMLMapperBuilder xmlMapperBuilder = new XMLMapperBuilder(mapperLocation.getInputStream(),
configuration, mapperLocation.toString(), configuration.getSqlFragments());
xmlMapperBuilder.parse();
} catch (Exception e) {
throw new NestedIOException("Failed to parse mapping resource: '" + mapperLocation + "'", e);
} finally {
ErrorContext.instance().reset();
}
if (logger.isDebugEnabled()) {
logger.debug("Parsed mapper file: '" + mapperLocation + "'");
}
}
} else {
if (logger.isDebugEnabled()) {
logger.debug("Property 'mapperLocations' was not specified or no matching resources found");
}
}
return this.sqlSessionFactoryBuilder.build(configuration);
}
/**
* {@inheritDoc}
*/
public SqlSessionFactory getObject() throws Exception {
if (this.sqlSessionFactory == null) {
afterPropertiesSet();
}
return this.sqlSessionFactory;
}
/**
* {@inheritDoc}
*/
public Class<? extends SqlSessionFactory> getObjectType() {
return this.sqlSessionFactory == null ? SqlSessionFactory.class : this.sqlSessionFactory.getClass();
}
/**
* {@inheritDoc}
*/
public boolean isSingleton() {
return true;
}
}

MapperScannerConfigurer

<!-- package containing the DAO interfaces; Spring automatically scans the classes under it -->
<bean class="org.mybatis.spring.mapper.MapperScannerConfigurer">
<property name="basePackage" value="com.kite.xxx.mapper" />
<property name="sqlSessionFactoryBeanName" value="sqlSessionFactory"></property>
</bean>

MapperScannerConfigurer is registered in the Spring container; it implements BeanDefinitionRegistryPostProcessor, so Spring calls it to register the mapper bean definitions (a usage sketch follows the code):

public class MapperScannerConfigurer implements BeanDefinitionRegistryPostProcessor, InitializingBean, ApplicationContextAware, BeanNameAware {
private String basePackage;
private boolean addToConfig = true;
private SqlSessionFactory sqlSessionFactory;
private SqlSessionTemplate sqlSessionTemplate;
private String sqlSessionFactoryBeanName;
private String sqlSessionTemplateBeanName;
private Class<? extends Annotation> annotationClass;
private Class<?> markerInterface;
private ApplicationContext applicationContext;
private String beanName;
private boolean processPropertyPlaceHolders;
private BeanNameGenerator nameGenerator;
/**
* {@inheritDoc}
*
* @since 1.0.2
*/
public void postProcessBeanDefinitionRegistry(BeanDefinitionRegistry registry) throws BeansException {
if (this.processPropertyPlaceHolders) {
processPropertyPlaceHolders();
}
ClassPathMapperScanner scanner = new ClassPathMapperScanner(registry);
scanner.setAddToConfig(this.addToConfig);
scanner.setAnnotationClass(this.annotationClass);
scanner.setMarkerInterface(this.markerInterface);
scanner.setSqlSessionFactory(this.sqlSessionFactory);
scanner.setSqlSessionTemplate(this.sqlSessionTemplate);
scanner.setSqlSessionFactoryBeanName(this.sqlSessionFactoryBeanName);
scanner.setSqlSessionTemplateBeanName(this.sqlSessionTemplateBeanName);
scanner.setResourceLoader(this.applicationContext);
scanner.setBeanNameGenerator(this.nameGenerator);
scanner.registerFilters();
scanner.scan(StringUtils.tokenizeToStringArray(this.basePackage, ConfigurableApplicationContext.CONFIG_LOCATION_DELIMITERS));
}
}
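The practical effect is that every interface under basePackage becomes an injectable mapper bean. A hedged sketch of what that enables (the interface, table and SQL below are made up for illustration):

import org.apache.ibatis.annotations.Param;
import org.apache.ibatis.annotations.Select;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;

// Hypothetical mapper under com.kite.xxx.mapper (the configured basePackage);
// the scanner registers a MapperFactoryBean for it, so Spring can inject a proxy.
public interface UserMapper {
    @Select("SELECT name FROM user WHERE id = #{id}")
    String selectNameById(@Param("id") long id);
}

@Service
class UserService {
    @Autowired
    private UserMapper userMapper; // proxy backed by the SqlSession

    public String findName(long id) {
        return userMapper.selectNameById(id);
    }
}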

References

A pitfall when getting latitude/longitude in HTML5

If you request geolocation (latitude/longitude) in HTML5 over plain HTTP, it fails with POSITION_UNAVAILABLE (geolocation unavailable); HTTPS is required.


Zipkin-Brave: a Dubbo Tracing Plugin Based on Spring Boot (Part 4)

Provide Zipkin call-chain tracing for Dubbo.

Implemented with Spring Boot.

As a starting point, we can look at the official implementation for another RPC framework, brave-grpc-3.9.0.jar.

How it works

Intercept the Dubbo call before and after execution, create spans, and associate the parentSpanId and traceId.

We need to implement four adapter interfaces:

  • ClientRequestAdapter
  • ClientResponseAdapter
  • ServerRequestAdapter
  • ServerResponseAdapter

DubboClientRequestAdapter implements ClientRequestAdapter:

public class DubboClientRequestAdapter implements ClientRequestAdapter {
private Map<String, String> headers;
private String spanName;
public DubboClientRequestAdapter(@Nullable Map<String, String> headers, @Nullable String spanName) {
this.headers = headers;
this.spanName = spanName;
}
@Override
public String getSpanName() {
return this.spanName;
}
@Override
public void addSpanIdToRequest(SpanId spanId) {
if (spanId == null) {
headers.put(DubboTraceConst.SAMPLED, "0");
} else {
headers.put(DubboTraceConst.SAMPLED, "1");
headers.put(DubboTraceConst.TRACE_ID, IdConversion.convertToString(spanId.traceId));
headers.put(DubboTraceConst.SPAN_ID, IdConversion.convertToString(spanId.spanId));
if (spanId.nullableParentId() != null) {
headers.put(DubboTraceConst.PARENT_SPAN_ID, IdConversion.convertToString(spanId.parentId));
}
}
}
@Override
public Collection<KeyValueAnnotation> requestAnnotations() {
return Collections.emptyList();
}
@Override
public Endpoint serverAddress() {
return null;
}
}

DubboClientResponseAdapter implements ClientResponseAdapter:

public class DubboClientResponseAdapter implements ClientResponseAdapter {
private StatusEnum status;
public DubboClientResponseAdapter(@Nullable StatusEnum status) {
this.status = status;
}
@Override
public Collection<KeyValueAnnotation> responseAnnotations() {
return Collections.singleton(KeyValueAnnotation.create(DubboTraceConst.STATUS_CODE, status.getDesc()));
}
}

DubboServerRequestAdapter implements ServerRequestAdapter:

public class DubboServerRequestAdapter implements ServerRequestAdapter {
private Map<String, String> headers;
private String spanName;
public DubboServerRequestAdapter(@Nullable Map<String, String> headers, @Nullable String spanName) {
this.headers = headers;
this.spanName = spanName;
}
@Override
public TraceData getTraceData() {
final String sampled = headers.get(DubboTraceConst.SAMPLED);
if (sampled != null) {
if (sampled.equals("0") || sampled.toLowerCase().equals("false")) {
return TraceData.builder().sample(false).build();
} else {
final String parentSpanId = headers.get(DubboTraceConst.PARENT_SPAN_ID);
final String traceId = headers.get(DubboTraceConst.TRACE_ID);
final String spanId = headers.get(DubboTraceConst.SPAN_ID);
if (traceId != null && spanId != null) {
SpanId span = getSpanId(traceId, spanId, parentSpanId);
return TraceData.builder().sample(true).spanId(span).build();
}
}
}
return TraceData.builder().build();
}
@Override
public String getSpanName() {
return this.spanName;
}
@Override
public Collection<KeyValueAnnotation> requestAnnotations() {
return Collections.emptyList();
}
static SpanId getSpanId(String traceId, String spanId, String parentSpanId) {
return SpanId.builder().traceId(convertToLong(traceId)).spanId(convertToLong(spanId))
.parentId(parentSpanId == null ? null : convertToLong(parentSpanId)).build();
}
}

DubboServerResponseAdapter implements ServerResponseAdapter:

public class DubboServerResponseAdapter implements ServerResponseAdapter {
private StatusEnum status;
public DubboServerResponseAdapter(@Nullable StatusEnum status) {
this.status = status;
}
@Override
public Collection<KeyValueAnnotation> responseAnnotations() {
return Collections.singleton(KeyValueAnnotation.create(DubboTraceConst.STATUS_CODE, status.getDesc()));
}
}

Intercepting the Dubbo call

A Dubbo invocation runs through the filter chain, which distinguishes PROVIDER from CONSUMER, so the four corresponding timestamps (cs/sr/ss/cr) can be recorded:

@Activate(group = {Constants.PROVIDER, Constants.CONSUMER})
public class BraveDubboFilter implements Filter {
/**
* tip: do not wire these via annotations here (the filter is created by Dubbo SPI, outside Spring)
*/
private ClientRequestInterceptor clientRequestInterceptor;
private ClientResponseInterceptor clientResponseInterceptor;
private ServerRequestInterceptor serverRequestInterceptor;
private ServerResponseInterceptor serverResponseInterceptor;
public void setClientRequestInterceptor(ClientRequestInterceptor clientRequestInterceptor) {
this.clientRequestInterceptor = clientRequestInterceptor;
}
public BraveDubboFilter setClientResponseInterceptor(ClientResponseInterceptor clientResponseInterceptor) {
this.clientResponseInterceptor = clientResponseInterceptor;
return this;
}
public BraveDubboFilter setServerRequestInterceptor(ServerRequestInterceptor serverRequestInterceptor) {
this.serverRequestInterceptor = serverRequestInterceptor;
return this;
}
public BraveDubboFilter setServerResponseInterceptor(ServerResponseInterceptor serverResponseInterceptor) {
this.serverResponseInterceptor = serverResponseInterceptor;
return this;
}
public Result invoke(Invoker<?> invoker, Invocation invocation) throws RpcException {
/*
* the Dubbo monitor service itself is not traced
*/
if ("com.alibaba.dubbo.monitor.MonitorService".equals(invoker.getInterface().getName())) {
return invoker.invoke(invocation);
}
RpcContext context = RpcContext.getContext();
/*
* the invoked method name, used as the span name
*/
String methodName = invocation.getMethodName();
/*
* provider application information
*/
StatusEnum status = StatusEnum.OK;
if ("0".equals(invocation.getAttachment(DubboTraceConst.SAMPLED))
|| "false".equals(invocation.getAttachment(DubboTraceConst.SAMPLED))) {
return invoker.invoke(invocation);
}
// look up and wire the interceptors from the Spring context
if(!inject()) {
return invoker.invoke(invocation);
}
if (context.isConsumerSide()) {
System.out.println("consumer execute");
/*
* Client side
*/
clientRequestInterceptor.handle(new DubboClientRequestAdapter(invocation.getAttachments(), methodName));
Result result = null;
try {
result = invoker.invoke(invocation);
} catch (RpcException e) {
status = StatusEnum.ERROR;
throw e;
} finally {
final DubboClientResponseAdapter clientResponseAdapter = new DubboClientResponseAdapter(status);
clientResponseInterceptor.handle(clientResponseAdapter);
}
return result;
} else if (context.isProviderSide()) {
System.out.println("provider execute");
serverRequestInterceptor.handle(new DubboServerRequestAdapter(context.getAttachments(), methodName));
Result result = null;
try {
result = invoker.invoke(invocation);
} finally {
serverResponseInterceptor.handle(new DubboServerResponseAdapter(status));
}
return result;
}
return invoker.invoke(invocation);
}
private boolean inject() {
Brave brave = ApplicationContextHolder.getBean(Brave.class);
if(brave == null) {
return false;
}
this.setClientRequestInterceptor(brave.clientRequestInterceptor());
this.setClientResponseInterceptor(brave.clientResponseInterceptor());
this.setServerRequestInterceptor(brave.serverRequestInterceptor());
this.setServerResponseInterceptor(brave.serverResponseInterceptor());
return true;
}
}

Spring Boot configuration

Enabled via an annotation:

@Target(ElementType.TYPE)
@Retention(RetentionPolicy.RUNTIME)
@Documented
@Import(DubboTraceConfiguration.class)
public @interface EnableDubboTrace {
}

Configuration class:

@Configuration
@ConditionalOnClass(Brave.class)
public class DubboTraceConfiguration {
@Bean
public ApplicationContextAware holder() {
return new ApplicationContextHolder();
}
}

ApplicationContextHolder

public class ApplicationContextHolder implements ApplicationContextAware {
private static ApplicationContext applicationContext;
public void setApplicationContext(ApplicationContext ctx) throws BeansException {
setCtx(ctx);
}
private static void setCtx(ApplicationContext ctx) {
applicationContext = ctx;
}
public static <T> T getBean(Class<T> requiredType){
return applicationContext.getBean(requiredType);
}
public static Object getBean(String classStr) {
return applicationContext.getBean(classStr);
}
}
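The holder is needed because the filter above is instantiated by Dubbo's SPI mechanism, outside the Spring container, so its interceptors cannot be injected with annotations; instead inject() looks the Brave bean up from the ApplicationContext at invocation time.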

Supporting classes

public interface DubboTraceConst {
String SAMPLED = "dubbo.trace.sampled";
String PARENT_SPAN_ID = "dubbo.trace.parentSpanId";
String SPAN_ID = "dubbo.trace.spanId";
String TRACE_ID = "dubbo.trace.traceId";
String STATUS_CODE = "dubbo.trace.staus_code";
}
public enum StatusEnum {
OK(200, "OK"),
ERROR(500, "ERROR");
private int code;
private String desc;
private StatusEnum(int code, String desc) {
this.code = code;
this.desc = desc;
}
public int getCode() {
return code;
}
public String getDesc() {
return desc;
}
}

Register the filter with Dubbo's SPI mechanism by adding the configuration file:

src/main/resources/META-INF/dubbo/com.alibaba.dubbo.rpc.Filter
BraveDubboFilter=com.kite.zipkin.filter.BraveDubboFilter

How to use it

Note: the precondition is that a Brave bean already exists.

Add the dependency:

<dependency>
<groupId>com.kite.zipkin</groupId>
<artifactId>dubbo-zipkin-spring-starter</artifactId>
<version>1.0.0</version>
</dependency>

Precondition configuration:

@Configuration
public class ZipkinConfig {
// collector for span data (one request, i.e. one hop of the call chain)
@Bean
public SpanCollector spanCollector() {
Config config = HttpSpanCollector.Config.builder()
.compressionEnabled(false)// default false: whether spans are gzipped before transport
.connectTimeout(5000)
.flushInterval(1)
.readTimeout(6000)
.build();
return HttpSpanCollector.create("http://localhost:9411", config, new EmptySpanCollectorMetricsHandler());
}
// each service in the call chain only needs to send data in the expected format to zipkin
@Bean
public Brave brave(SpanCollector spanCollector){
Builder builder = new Builder("service1");// the serviceName
builder.spanCollector(spanCollector);
builder.traceSampler(Sampler.create(1));// sample rate
return builder.build();
}
}

Enable the Dubbo trace:

@SpringBootApplication
@EnableDubboTrace
public class Application {
public static void main(String[] args) {
SpringApplication.run(Application.class, args);
}
}

Result (screenshot)

PS

Zipkin Java Brave Source Code Analysis (Part 3)

What is Brave

Brave is the official Java Zipkin client implementation.

What Brave provides

Based on brave 3.9.0:

<modules>
<module>brave-core</module>
<module>brave-benchmarks</module>
<module>brave-http</module>
<module>brave-core-spring</module>
<module>brave-resteasy-spring</module>
<module>brave-resteasy3-spring</module>
<module>brave-spancollector-http</module>
<module>brave-spancollector-scribe</module>
<module>brave-spancollector-kafka</module>
<module>brave-spancollector-local</module>
<module>brave-sampler-zookeeper</module>
<module>brave-jersey</module>
<module>brave-jersey2</module>
<module>brave-jaxrs2</module>
<module>brave-grpc</module>
<module>brave-apache-http-interceptors</module>
<module>brave-spring-web-servlet-interceptor</module>
<module>brave-spring-resttemplate-interceptors</module>
<module>brave-mysql</module>
<module>brave-web-servlet-filter</module>
<module>brave-okhttp</module>
</modules>

Brave source analysis along the HTTP path

brave-spancollector-http

Provides the HTTP span collector (HttpSpanCollector).

brave-web-servlet-filter

Provides a servlet filter for incoming HTTP requests.

brave-apache-http-interceptors

Provides interceptors for outgoing requests made with apache-http-client.

Starting with Spring Boot

Step 1: configuration

  • a SpanCollector for collecting spans
  • a BraveServletFilter for incoming HTTP requests
  • a Brave instance for reporting the data
  • BraveHttpRequestInterceptor and BraveHttpResponseInterceptor for outgoing http-client requests
    @Configuration
    public class ZipkinConfig {
    // collector for span data (one request, i.e. one hop of the call chain)
    @Bean
    public SpanCollector spanCollector() {
    Config config = HttpSpanCollector.Config.builder()
    .compressionEnabled(false)// default false: whether spans are gzipped before transport
    .connectTimeout(5000)
    .flushInterval(1)
    .readTimeout(6000)
    .build();
    return HttpSpanCollector.create("http://localhost:9411", config, new EmptySpanCollectorMetricsHandler());
    }
    // each service in the call chain only needs to send data in the expected format to zipkin
    @Bean
    public Brave brave(SpanCollector spanCollector){
    Builder builder = new Builder("service1");// the serviceName
    builder.spanCollector(spanCollector);
    builder.traceSampler(Sampler.create(1));// sample rate
    return builder.build();
    }
    // server-side filter (records when the server receives the request and when it finishes and sends the response)
    @Bean
    public BraveServletFilter braveServletFilter(Brave brave) {
    BraveServletFilter filter = new BraveServletFilter(brave.serverRequestInterceptor(),
    brave.serverResponseInterceptor(), new DefaultSpanNameProvider());
    return filter;
    }
    // client-side interceptors (records when the request is sent and when the response is received)
    @Bean
    public CloseableHttpClient okHttpClient(Brave brave){
    CloseableHttpClient httpclient = HttpClients.custom()
    .addInterceptorFirst(new BraveHttpRequestInterceptor(brave.clientRequestInterceptor(), new DefaultSpanNameProvider()))
    .addInterceptorFirst(new BraveHttpResponseInterceptor(brave.clientResponseInterceptor()))
    .build();
    return httpclient;
    }
    }

Server-side handling of an incoming HTTP request

POST or GET url: http://localhost/service1

For the related code, see "Zipkin Introduction and Environment Setup (Part 1)".

Flow chart: brave-http-collector-receive
Key points:

For an incoming request, if the Sampled header (X-B3-Sampled) is present, the ParentSpanId, TraceId and SpanId are read from the headers and returned directly; otherwise it is treated as a new request and a new span is built.
HttpServerRequestAdapter.getTraceData()
public TraceData getTraceData() {
final String sampled = serverRequest.getHttpHeaderValue(BraveHttpHeaders.Sampled.getName());
if (sampled != null) {
if (sampled.equals("0") || sampled.toLowerCase().equals("false")) {
return TraceData.builder().sample(false).build();
} else {
final String parentSpanId = serverRequest.getHttpHeaderValue(BraveHttpHeaders.ParentSpanId.getName());
final String traceId = serverRequest.getHttpHeaderValue(BraveHttpHeaders.TraceId.getName());
final String spanId = serverRequest.getHttpHeaderValue(BraveHttpHeaders.SpanId.getName());
if (traceId != null && spanId != null) {
SpanId span = getSpanId(traceId, spanId, parentSpanId);
return TraceData.builder().sample(true).spanId(span).build();
}
}
}
return TraceData.builder().build();
}
Sampling of the request:
traceSampler().isSampled(newTraceId) makes the decision; without ZooKeeper this falls to CountingSampler (a sketch follows the snippet):
public synchronized boolean isSampled(long traceIdIgnored) {
boolean result = sampleDecisions.get(i++);
if (i == 100) i = 0;
return result;
}
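The sampler walks through a fixed pattern of 100 yes/no decisions. A hedged re-implementation of the idea (not the actual brave CountingSampler):

import java.util.BitSet;
import java.util.Random;

// Sketch of a counting sampler: out of every 100 trace ids, roughly rate*100 are sampled.
public class CountingSamplerSketch {
    private final BitSet decisions = new BitSet(100);
    private int i;

    public CountingSamplerSketch(float rate) {
        int cardinality = (int) (rate * 100.0f);
        Random random = new Random();
        while (decisions.cardinality() < cardinality) {
            decisions.set(random.nextInt(100)); // scatter the positive decisions
        }
    }

    public synchronized boolean isSampled(long traceIdIgnored) {
        boolean result = decisions.get(i++);
        if (i == 100) i = 0;
        return result;
    }
}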

Sending a request with apache-http-client

Flow chart: brave-http-client-send

How to add your own annotation or binaryAnnotation in code

Simply inject Brave and submit it directly (not recommended: it is intrusive to the code, and Zipkin discourages attaching large amounts of data):

@RestController
public class ZipkinBraveController {
@Autowired
private CloseableHttpClient httpClient;
@Autowired
private com.github.kristofa.brave.Brave brave;
@GetMapping("/service1")
public String myboot() throws Exception {
brave.serverTracer().submitBinaryAnnotation("status", "success");
Thread.sleep(100);//100ms
HttpGet get = new HttpGet("http://localhost:81/test");
CloseableHttpResponse execute = httpClient.execute(get);
/*
* 1. Before and after execute(), the corresponding client interceptors run (cs, cr)
* 2. Before and after the callee handles the request, the corresponding server interceptors run (sr, ss)
*/
return EntityUtils.toString(execute.getEntity(), "utf-8");
}
}

PS

  • If the collector should use Kafka, simply switch the SpanCollector; the server side needs the corresponding Kafka configuration:
    client side:
    KafkaSpanCollector.create(KafkaSpanCollector.Config.builder().kafkaProperties(null).build(), new EmptySpanCollectorMetricsHandler());
    the server side needs the Kafka configuration:
    final class KafkaZooKeeperSetCondition extends SpringBootCondition {
    static final String PROPERTY_NAME = "zipkin.collector.kafka.zookeeper";
    @Override
    public ConditionOutcome getMatchOutcome(ConditionContext context, AnnotatedTypeMetadata a) {
    String kafkaZookeeper = context.getEnvironment().getProperty(PROPERTY_NAME);
    return kafkaZookeeper == null || kafkaZookeeper.isEmpty() ?
    ConditionOutcome.noMatch(PROPERTY_NAME + " isn't set") :
    ConditionOutcome.match();
    }
    }

Links

Zipkin Server Source Code Analysis (Part 2)

Source code location

zipkin-server configuration file

zipkin:
self-tracing:
# Set to true to enable self-tracing.
enabled: ${SELF_TRACING_ENABLED:false}
# percentage to self-traces to retain
sample-rate: ${SELF_TRACING_SAMPLE_RATE:1.0}
# Interval in seconds to flush self-tracing data to storage.
flush-interval: ${SELF_TRACING_FLUSH_INTERVAL:1}
collector:
# percentage to traces to retain
sample-rate: ${COLLECTOR_SAMPLE_RATE:1.0}
kafka:
# ZooKeeper host string, comma-separated host:port value.
zookeeper: ${KAFKA_ZOOKEEPER:}
# Name of topic to poll for spans
topic: ${KAFKA_TOPIC:zipkin}
# Consumer group this process is consuming on behalf of.
group-id: ${KAFKA_GROUP_ID:zipkin}
# Count of consumer threads consuming the topic
streams: ${KAFKA_STREAMS:1}
# Maximum size of a message containing spans in bytes
max-message-size: ${KAFKA_MAX_MESSAGE_SIZE:1048576}
scribe:
enabled: ${SCRIBE_ENABLED:false}
category: zipkin
port: ${COLLECTOR_PORT:9410}
query:
# 7 days in millis
lookback: ${QUERY_LOOKBACK:86400000}
# The Cache-Control max-age (seconds) for /api/v1/services and /api/v1/spans
names-max-age: 300
# CORS allowed-origins.
allowed-origins: "*"
storage:
strict-trace-id: ${STRICT_TRACE_ID:true}
type: ${STORAGE_TYPE:mem}
cassandra:
# Comma separated list of hosts / ip addresses part of Cassandra cluster.
contact-points: ${CASSANDRA_CONTACT_POINTS:localhost}
# Name of the datacenter that will be considered "local" for latency load balancing. When unset, load-balancing is round-robin.
local-dc: ${CASSANDRA_LOCAL_DC:}
# Will throw an exception on startup if authentication fails.
username: ${CASSANDRA_USERNAME:}
password: ${CASSANDRA_PASSWORD:}
keyspace: ${CASSANDRA_KEYSPACE:zipkin}
# Max pooled connections per datacenter-local host.
max-connections: ${CASSANDRA_MAX_CONNECTIONS:8}
# Ensuring that schema exists, if enabled tries to execute script /zipkin-cassandra-core/resources/cassandra-schema-cql3.txt.
ensure-schema: ${CASSANDRA_ENSURE_SCHEMA:true}
# 7 days in seconds
span-ttl: ${CASSANDRA_SPAN_TTL:604800}
# 3 days in seconds
index-ttl: ${CASSANDRA_INDEX_TTL:259200}
# the maximum trace index metadata entries to cache
index-cache-max: ${CASSANDRA_INDEX_CACHE_MAX:100000}
# how long to cache index metadata about a trace. 1 minute in seconds
index-cache-ttl: ${CASSANDRA_INDEX_CACHE_TTL:60}
# how many more index rows to fetch than the user-supplied query limit
index-fetch-multiplier: ${CASSANDRA_INDEX_FETCH_MULTIPLIER:3}
# Using ssl for connection, rely on Keystore
use-ssl: ${CASSANDRA_USE_SSL:false}
cassandra3:
# Comma separated list of hosts / ip addresses part of Cassandra cluster.
contact-points: ${CASSANDRA3_CONTACT_POINTS:localhost}
# Name of the datacenter that will be considered "local" for latency load balancing. When unset, load-balancing is round-robin.
local-dc: ${CASSANDRA3_LOCAL_DC:}
# Will throw an exception on startup if authentication fails.
username: ${CASSANDRA3_USERNAME:}
password: ${CASSANDRA3_PASSWORD:}
keyspace: ${CASSANDRA3_KEYSPACE:zipkin3}
# Max pooled connections per datacenter-local host.
max-connections: ${CASSANDRA3_MAX_CONNECTIONS:8}
# Ensuring that schema exists, if enabled tries to execute script /cassandra3-schema.cql
ensure-schema: ${CASSANDRA3_ENSURE_SCHEMA:true}
# how many more index rows to fetch than the user-supplied query limit
index-fetch-multiplier: ${CASSANDRA3_INDEX_FETCH_MULTIPLIER:3}
# Using ssl for connection, rely on Keystore
use-ssl: ${CASSANDRA3_USE_SSL:false}
elasticsearch:
# host is left unset intentionally, to defer the decision
hosts: ${ES_HOSTS:}
pipeline: ${ES_PIPELINE:}
max-requests: ${ES_MAX_REQUESTS:64}
aws:
domain: ${ES_AWS_DOMAIN:}
region: ${ES_AWS_REGION:}
index: ${ES_INDEX:zipkin}
date-separator: ${ES_DATE_SEPARATOR:-}
index-shards: ${ES_INDEX_SHARDS:5}
index-replicas: ${ES_INDEX_REPLICAS:1}
username: ${ES_USERNAME:}
password: ${ES_PASSWORD:}
mysql:
host: ${MYSQL_HOST:localhost}
port: ${MYSQL_TCP_PORT:3306}
username: ${MYSQL_USER:}
password: ${MYSQL_PASS:}
db: ${MYSQL_DB:zipkin}
max-active: ${MYSQL_MAX_CONNECTIONS:10}
use-ssl: ${MYSQL_USE_SSL:false}
ui:
## Values below here are mapped to ZipkinUiProperties, served as /config.json
# Default limit for Find Traces
query-limit: 10
# The value here becomes a label in the top-right corner
environment:
# Default duration to look back when finding traces.
# Affects the "Start time" element in the UI. 1 hour in millis
default-lookback: 3600000
# Which sites this Zipkin UI covers. Regex syntax. (e.g. http:\/\/example.com\/.*)
# Multiple sites can be specified, e.g.
# - .*example1.com
# - .*example2.com
# Default is "match all websites"
instrumented: .*
server:
port: ${QUERY_PORT:9411}
compression:
enabled: true
# compresses any response over min-response-size (default is 2KiB)
# Includes dynamic json content and large static assets from zipkin-ui
mime-types: application/json,application/javascript,text/css,image/svg
spring:
mvc:
favicon:
# zipkin has its own favicon
enabled: false
autoconfigure:
exclude:
# otherwise we might initialize even when not needed (ex when storage type is cassandra)
- org.springframework.boot.autoconfigure.jdbc.DataSourceAutoConfiguration
info:
zipkin:
version: "@project.version@"
logging:
level:
# Silence Invalid method name: '__can__finagle__trace__v3__'
com.facebook.swift.service.ThriftServiceProcessor: 'OFF'
# # investigate /api/v1/dependencies
# zipkin.internal.DependencyLinker: 'DEBUG'
# # log cassandra queries (DEBUG is without values)
# com.datastax.driver.core.QueryLogger: 'TRACE'
# # log cassandra trace propagation
# com.datastax.driver.core.Message: 'TRACE'

Starting zipkin-server

Zipkin is built on Spring Boot.

zipkin-server

@SpringBootApplication
@EnableZipkinServer
public class ZipkinServer {
public static void main(String[] args) {
new SpringApplicationBuilder(ZipkinServer.class)
.listeners(new RegisterZipkinHealthIndicators())
.properties("spring.config.name=zipkin-server").run(args);
}
}
Imported by @EnableZipkinServer:
@Target(ElementType.TYPE)
@Retention(RetentionPolicy.RUNTIME)
@Documented
@Import({ZipkinServerConfiguration.class, BraveConfiguration.class, ZipkinQueryApiV1.class, ZipkinHttpCollector.class})
public @interface EnableZipkinServer {
}

Step 1: build the storage

  • StorageComponent
  • SpanStore
    storage:
    strict-trace-id: ${STRICT_TRACE_ID:true}
    type: ${STORAGE_TYPE:mem}
    The configuration file defaults to mem (in-memory) storage.
    It can be overridden through the corresponding Spring Boot property,
    i.e. zipkin.storage.type (populated from the STORAGE_TYPE placeholder above); see the note after this code block.
    @Configuration
    public class ZipkinServerConfiguration {
    ...
    // the default storage configuration; only applied when zipkin.storage.type=mem
    @Configuration
    // "matchIfMissing = true" ensures this is used when there's no configured storage type
    @ConditionalOnProperty(name = "zipkin.storage.type", havingValue = "mem", matchIfMissing = true)
    @ConditionalOnMissingBean(StorageComponent.class)
    static class InMemoryConfiguration {
    @Bean StorageComponent storage(@Value("${zipkin.storage.strict-trace-id:true}") boolean strictTraceId) {
    return InMemoryStorage.builder().strictTraceId(strictTraceId).build();
    }
    }
    }
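    In other words, in-memory storage is only the fallback: start the server with a different STORAGE_TYPE (which the YAML maps to zipkin.storage.type), for example STORAGE_TYPE=elasticsearch, and the mem branch's @ConditionalOnProperty no longer matches, so a different StorageComponent bean is created instead.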

The exposed API

Ingesting traces

REST entry point:

@RestController
@CrossOrigin("${zipkin.query.allowed-origins:*}")
public class ZipkinHttpCollector {
static final ResponseEntity<?> SUCCESS = ResponseEntity.accepted().build();
static final String APPLICATION_THRIFT = "application/x-thrift";
final CollectorMetrics metrics;
final Collector collector;
@Autowired ZipkinHttpCollector(StorageComponent storage, CollectorSampler sampler,
CollectorMetrics metrics) {
this.metrics = metrics.forTransport("http");
this.collector = Collector.builder(getClass())
.storage(storage).sampler(sampler).metrics(this.metrics).build();
}
// entry point
@RequestMapping(value = "/api/v1/spans", method = POST)
public ListenableFuture<ResponseEntity<?>> uploadSpansJson(
@RequestHeader(value = "Content-Encoding", required = false) String encoding,
@RequestBody byte[] body
) {
return validateAndStoreSpans(encoding, Codec.JSON, body);
}
@RequestMapping(value = "/api/v1/spans", method = POST, consumes = APPLICATION_THRIFT)
public ListenableFuture<ResponseEntity<?>> uploadSpansThrift(
@RequestHeader(value = "Content-Encoding", required = false) String encoding,
@RequestBody byte[] body
) {
return validateAndStoreSpans(encoding, Codec.THRIFT, body);
}
ListenableFuture<ResponseEntity<?>> validateAndStoreSpans(String encoding, Codec codec,
byte[] body) {
SettableListenableFuture<ResponseEntity<?>> result = new SettableListenableFuture<>();
metrics.incrementMessages();
if (encoding != null && encoding.contains("gzip")) {
try {
body = gunzip(body);
} catch (IOException e) {
metrics.incrementMessagesDropped();
result.set(ResponseEntity.badRequest().body("Cannot gunzip spans: " + e.getMessage() + "\n"));
}
}
// accept the spans
collector.acceptSpans(body, codec, new Callback<Void>() {
@Override public void onSuccess(@Nullable Void value) {
result.set(SUCCESS);
}
@Override public void onError(Throwable t) {
String message = t.getMessage() == null ? t.getClass().getSimpleName() : t.getMessage();
result.set(t.getMessage() == null || message.startsWith("Cannot store")
? ResponseEntity.status(500).body(message + "\n")
: ResponseEntity.status(400).body(message + "\n"));
}
});
return result;
}
// omitted
}

The Collector

public final class Collector {
/** Needed to scope this to the correct logging category */
public static Builder builder(Class<?> loggingClass) {
return new Builder(Logger.getLogger(checkNotNull(loggingClass, "loggingClass").getName()));
}
public static final class Builder {
final Logger logger;
StorageComponent storage = null;
CollectorSampler sampler = CollectorSampler.ALWAYS_SAMPLE;
CollectorMetrics metrics = CollectorMetrics.NOOP_METRICS;
...
public Collector build() {
return new Collector(this);
}
}
final Logger logger;
final StorageComponent storage;
final CollectorSampler sampler;
final CollectorMetrics metrics;
Collector(Builder builder) {
this.logger = checkNotNull(builder.logger, "logger");
this.storage = checkNotNull(builder.storage, "storage");
this.sampler = builder.sampler == null ? CollectorSampler.ALWAYS_SAMPLE : builder.sampler;
this.metrics = builder.metrics == null ? CollectorMetrics.NOOP_METRICS : builder.metrics;
}
public void acceptSpans(byte[] serializedSpans, Codec codec, Callback<Void> callback) {
metrics.incrementBytes(serializedSpans.length);// record metrics
List<Span> spans;
try {
spans = codec.readSpans(serializedSpans);// decode the byte array into Span objects
} catch (RuntimeException e) {
callback.onError(errorReading(e));
return;
}
accept(spans, callback);// process the spans
}
...
public void accept(List<Span> spans, Callback<Void> callback) {
if (spans.isEmpty()) {
callback.onSuccess(null);
return;
}
metrics.incrementSpans(spans.size());
List<Span> sampled = sample(spans);
if (sampled.isEmpty()) {
callback.onSuccess(null);
return;
}
try {
storage.asyncSpanConsumer().accept(sampled, acceptSpansCallback(sampled));// hand off to storage
callback.onSuccess(null);
} catch (RuntimeException e) {
callback.onError(errorStoringSpans(sampled, e));
return;
}
}
// sampling
List<Span> sample(List<Span> input) {
List<Span> sampled = new ArrayList<>(input.size());
for (Span s : input) {
if (sampler.isSampled(s)) sampled.add(s);
}
int dropped = input.size() - sampled.size();
if (dropped > 0) metrics.incrementSpansDropped(dropped);
return sampled;
}
...
}

InMemorySpanStore does the final handling:

/** Internally, spans are indexed on 64-bit trace ID */
public final class InMemorySpanStore implements SpanStore {
private final Multimap<Long, Span> traceIdToSpans = new LinkedListMultimap<>();// traceId -> spans
private final Set<Pair<Long>> traceIdTimeStamps = new TreeSet<>(VALUE_2_DESCENDING);// traceId + timestamp
private final Multimap<String, Pair<Long>> serviceToTraceIdTimeStamp =
new SortedByValue2Descending<>();
private final Multimap<String, String> serviceToSpanNames =
new LinkedHashSetMultimap<>();// serviceName -> spanNames
private final boolean strictTraceId;
volatile int acceptedSpanCount;
// Historical constructor
public InMemorySpanStore() {
this(new InMemoryStorage.Builder());
}
InMemorySpanStore(InMemoryStorage.Builder builder) {
this.strictTraceId = builder.strictTraceId;
}
final StorageAdapters.SpanConsumer spanConsumer = new StorageAdapters.SpanConsumer() {
@Override public void accept(List<Span> spans) {
for (Span span : spans) {
Long timestamp = guessTimestamp(span);
Pair<Long> traceIdTimeStamp =
Pair.create(span.traceId, timestamp == null ? Long.MIN_VALUE : timestamp);
String spanName = span.name;
synchronized (InMemorySpanStore.this) {
traceIdTimeStamps.add(traceIdTimeStamp);
traceIdToSpans.put(span.traceId, span);
acceptedSpanCount++;
for (String serviceName : span.serviceNames()) {
serviceToTraceIdTimeStamp.put(serviceName, traceIdTimeStamp);
serviceToSpanNames.put(serviceName, spanName);
}
}
}
}
@Override public String toString() {
return "InMemorySpanConsumer";
}
};
...
}
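These maps are effectively the whole database: traceIdToSpans answers lookups by trace id, traceIdTimeStamps and serviceToTraceIdTimeStamp drive the time-ordered trace queries, and serviceToSpanNames backs the service-name and span-name endpoints used by the query API below.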

Querying traces

The query API reads from the configured storage:

@RestController
@RequestMapping("/api/v1")
@CrossOrigin("${zipkin.query.allowed-origins:*}")
public class ZipkinQueryApiV1 {
@Autowired
@Value("${zipkin.query.lookback:86400000}")
int defaultLookback = 86400000; // 1 day in millis
/** The Cache-Control max-age (seconds) for /api/v1/services and /api/v1/spans */
@Value("${zipkin.query.names-max-age:300}")
int namesMaxAge = 300; // 5 minutes
volatile int serviceCount; // used as a threshold to start returning cache-control headers
private final StorageComponent storage;
@Autowired
public ZipkinQueryApiV1(StorageComponent storage) {
this.storage = storage; // don't cache spanStore here as it can cause the app to crash!
}
@RequestMapping(value = "/dependencies", method = RequestMethod.GET, produces = APPLICATION_JSON_VALUE)
public byte[] getDependencies(@RequestParam(value = "endTs", required = true) long endTs,
@RequestParam(value = "lookback", required = false) Long lookback) {
return Codec.JSON.writeDependencyLinks(storage.spanStore().getDependencies(endTs, lookback != null ? lookback : defaultLookback));
}
@RequestMapping(value = "/services", method = RequestMethod.GET)
public ResponseEntity<List<String>> getServiceNames() {
List<String> serviceNames = storage.spanStore().getServiceNames();
serviceCount = serviceNames.size();
return maybeCacheNames(serviceNames);
}
@RequestMapping(value = "/spans", method = RequestMethod.GET)
public ResponseEntity<List<String>> getSpanNames(
@RequestParam(value = "serviceName", required = true) String serviceName) {
return maybeCacheNames(storage.spanStore().getSpanNames(serviceName));
}
@RequestMapping(value = "/traces", method = RequestMethod.GET, produces = APPLICATION_JSON_VALUE)
public String getTraces(
@RequestParam(value = "serviceName", required = false) String serviceName,
@RequestParam(value = "spanName", defaultValue = "all") String spanName,
@RequestParam(value = "annotationQuery", required = false) String annotationQuery,
@RequestParam(value = "minDuration", required = false) Long minDuration,
@RequestParam(value = "maxDuration", required = false) Long maxDuration,
@RequestParam(value = "endTs", required = false) Long endTs,
@RequestParam(value = "lookback", required = false) Long lookback,
@RequestParam(value = "limit", required = false) Integer limit) {
QueryRequest queryRequest = QueryRequest.builder()
.serviceName(serviceName)
.spanName(spanName)
.parseAnnotationQuery(annotationQuery)
.minDuration(minDuration)
.maxDuration(maxDuration)
.endTs(endTs)
.lookback(lookback != null ? lookback : defaultLookback)
.limit(limit).build();
return new String(Codec.JSON.writeTraces(storage.spanStore().getTraces(queryRequest)), UTF_8);
}
@RequestMapping(value = "/trace/{traceIdHex}", method = RequestMethod.GET, produces = APPLICATION_JSON_VALUE)
public String getTrace(@PathVariable String traceIdHex, WebRequest request) {
long traceIdHigh = traceIdHex.length() == 32 ? lowerHexToUnsignedLong(traceIdHex, 0) : 0L;
long traceIdLow = lowerHexToUnsignedLong(traceIdHex);
String[] raw = request.getParameterValues("raw"); // RequestParam doesn't work for param w/o value
List<Span> trace = raw != null
? storage.spanStore().getRawTrace(traceIdHigh, traceIdLow)
: storage.spanStore().getTrace(traceIdHigh, traceIdLow);
if (trace == null) {
throw new TraceNotFoundException(traceIdHex, traceIdHigh, traceIdLow);
}
return new String(Codec.JSON.writeSpans(trace), UTF_8);
}
@ExceptionHandler(TraceNotFoundException.class)
@ResponseStatus(HttpStatus.NOT_FOUND)
public void notFound() {
}
static class TraceNotFoundException extends RuntimeException {
public TraceNotFoundException(String traceIdHex, Long traceIdHigh, long traceId) {
super(String.format("Cannot find trace for id=%s, parsed value=%s", traceIdHex,
traceIdHigh != null ? traceIdHigh + "," + traceId : traceId));
}
}
/**
* We cache names if there are more than 3 services. This helps people getting started: if we
* cache empty results, users have more questions. We assume caching becomes a concern when zipkin
* is in active use, and active use usually implies more than 3 services.
*/
ResponseEntity<List<String>> maybeCacheNames(List<String> names) {
ResponseEntity.BodyBuilder response = ResponseEntity.ok();
if (serviceCount > 3) {
response.cacheControl(CacheControl.maxAge(namesMaxAge, TimeUnit.SECONDS).mustRevalidate());
}
return response.body(names);
}
}

PS

  • If you change the storage type, e.g. storage.type=elasticsearch, the switch is automatic: following Spring Boot's auto-configuration rules, ZipkinElasticsearchHttpStorageAutoConfiguration runs, and once its conditions hold it creates the Elasticsearch storage directly.
  • Only the in-memory flow is covered here; the other flows can be followed in the same way if you are interested.

Flow charts

  • zipkin-server handling a span-insert request (in-memory) — flow chart: zipkin-server-inmemory

  • zipkin-server handling a query request (in-memory) — flow chart: zipkin-server-inmemory

  • Project source code

References

Zipkin Tutorial: Introduction and Environment Setup (Part 1)

What is Zipkin

Zipkin is an open-source project from Twitter that lets developers collect monitoring data from each of Twitter's services and provides a query interface.

Why use Zipkin

As the business grows, system decomposition makes call chains ever more complex: a single front-end request may end up calling many back-end services. When the whole request becomes slow or unavailable, we cannot tell which back-end service(s) caused it, so we need a way to quickly locate the failing service and fix the right problem. This is why distributed call tracing came into being, and Zipkin is one of the leading open-source implementations.

Zipkin is based on Google's Dapper paper; it is worth reading if you are interested:

google-Dapper

Zipkin architecture overview

Zipkin architecture (diagram)

It consists of the following components:

  • collector — span collector
  • storage
  • api — query API
  • ui — web UI

Zipkin storage

Zipkin stores data in memory (inMemory) by default.

Supported storage backends:

  • inMemory
  • mysql
  • Cassandra
  • Elasticsearch

Zipkin data model

  • Trace: the set of spans that make up one user request; there is exactly one root span.
  • Span: the set of annotations belonging to one HTTP/RPC call.
  • annotation: a value, a timestamp, and a host name (the mark an event leaves behind).

Key timestamps (a worked example follows this list):

  • cs: the client sends the request, marking the start of the span
  • sr: the server receives the request and starts processing; sr - cs is network latency plus clock skew
  • ss: the server finishes processing and sends the response; ss - sr is the server-side processing time
  • cr: the client receives the response, marking the end of the span; cr - ss is network latency plus clock skew
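A worked example with made-up timestamps: if cs=0ms, sr=10ms, ss=60ms and cr=75ms, then server processing time is ss - sr = 50ms, the round trip seen by the client is cr - cs = 75ms, and the remaining (cr - cs) - (ss - sr) = 25ms is network latency plus clock skew spread over the two directions.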

Setting up Zipkin

Download the file.

Start Zipkin:

java -jar zipkin-server-1.22.1-exec.jar

Starting Zipkin with elasticsearch-5.3.0 as the storage backend

Links

  • elasticsearch-5.3.0 download
  • github
    Unpack elasticsearch-5.3.0 and run it; once startup completes (screenshots omitted), start Zipkin with Elasticsearch:
    java -jar zipkin-server-1.22.1-exec.jar --STORAGE_TYPE=elasticsearch --DES_HOSTS=http://localhost:9200
    zipkin-server-1.22.1-exec.jar is built with Spring Boot, and Spring Boot arguments are passed as --key=value.
    Why --STORAGE_TYPE and --DES_HOSTS are used here is determined by the placeholders in the configuration file.

Zipkin console (screenshot)
Zipkin trace detail (screenshot)
Zipkin dependencies (screenshot)

Using Zipkin with Spring Boot and apache-httpclient

Project structure (screenshot)

pom.xml
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<groupId>com.kite.zipkin</groupId>
<artifactId>zipkin-demo-server</artifactId>
<version>0.0.1-SNAPSHOT</version>
<packaging>jar</packaging>
<name>zipkin-demo-server</name>
<url>http://maven.apache.org</url>
<parent>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-parent</artifactId>
<version>1.5.2.RELEASE</version>
</parent>
<properties>
<project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
</properties>
<dependencies>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-web</artifactId>
</dependency>
<!-- brave core -->
<dependency>
<groupId>io.zipkin.brave</groupId>
<artifactId>brave-core</artifactId>
<version>3.9.0</version>
</dependency>
<dependency>
<groupId>io.zipkin.brave</groupId>
<artifactId>brave-spancollector-http</artifactId>
<version>3.9.0</version>
</dependency>
<dependency>
<groupId>io.zipkin.brave</groupId>
<artifactId>brave-web-servlet-filter</artifactId>
<version>3.9.0</version>
</dependency>
<dependency>
<groupId>io.zipkin.brave</groupId>
<artifactId>brave-apache-http-interceptors</artifactId>
<version>3.9.0</version>
</dependency>
<!-- https://mvnrepository.com/artifact/org.apache.httpcomponents/httpclient -->
<dependency>
<groupId>org.apache.httpcomponents</groupId>
<artifactId>httpclient</artifactId>
</dependency>
</dependencies>
<build>
<plugins>
<plugin>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-maven-plugin</artifactId>
</plugin>
<plugin>
<artifactId>maven-compiler-plugin</artifactId>
<configuration>
<source>1.7</source>
<target>1.7</target>
<encoding>UTF-8</encoding>
</configuration>
</plugin>
</plugins>
</build>
</project>

ZipkinConfig

package com.kite.zipkin.config;

import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

import com.github.kristofa.brave.Brave;
import com.github.kristofa.brave.Brave.Builder;
import com.github.kristofa.brave.EmptySpanCollectorMetricsHandler;
import com.github.kristofa.brave.Sampler;
import com.github.kristofa.brave.SpanCollector;
import com.github.kristofa.brave.http.DefaultSpanNameProvider;
import com.github.kristofa.brave.http.HttpSpanCollector;
import com.github.kristofa.brave.http.HttpSpanCollector.Config;
import com.github.kristofa.brave.httpclient.BraveHttpRequestInterceptor;
import com.github.kristofa.brave.httpclient.BraveHttpResponseInterceptor;
import com.github.kristofa.brave.servlet.BraveServletFilter;

@Configuration
public class ZipkinConfig {

    // Collector for span data (a span represents one request / one call in the chain)
    @Bean
    public SpanCollector spanCollector() {
        Config config = HttpSpanCollector.Config.builder()
                .compressionEnabled(false) // default false; whether spans are gzipped before transport
                .connectTimeout(5000)
                .flushInterval(1)
                .readTimeout(6000)
                .build();
        return HttpSpanCollector.create("http://localhost:9411", config, new EmptySpanCollectorMetricsHandler());
    }

    // Each service in the call chain only has to send data in the expected format to Zipkin
    @Bean
    public Brave brave(SpanCollector spanCollector) {
        Builder builder = new Builder("service1"); // serviceName
        builder.spanCollector(spanCollector);
        builder.traceSampler(Sampler.create(1)); // sampling rate
        return builder.build();
    }

    // Server-side filter (records when the server receives the request and when it finishes and responds)
    @Bean
    public BraveServletFilter braveServletFilter(Brave brave) {
        BraveServletFilter filter = new BraveServletFilter(brave.serverRequestInterceptor(),
                brave.serverResponseInterceptor(), new DefaultSpanNameProvider());
        return filter;
    }

    // Client-side interceptors (record when the request is sent and when the response is received)
    @Bean
    public CloseableHttpClient okHttpClient(Brave brave) {
        CloseableHttpClient httpclient = HttpClients.custom()
                .addInterceptorFirst(new BraveHttpRequestInterceptor(brave.clientRequestInterceptor(), new DefaultSpanNameProvider()))
                .addInterceptorFirst(new BraveHttpResponseInterceptor(brave.clientResponseInterceptor()))
                .build();
        return httpclient;
    }
}

ZipkinBraveController

package com.kite.zipkin.controller;

import org.apache.http.client.methods.CloseableHttpResponse;
import org.apache.http.client.methods.HttpGet;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.util.EntityUtils;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class ZipkinBraveController {

    @Autowired
    private CloseableHttpClient okHttpClient;

    @GetMapping("/service1")
    public String myboot() throws Exception {
        Thread.sleep(100); // 100 ms
        HttpGet get = new HttpGet("http://localhost:81/test");
        CloseableHttpResponse execute = okHttpClient.execute(get);
        /*
         * 1. Around execute(), the client-side interceptors fire (cs, cr).
         * 2. Around the handling on the callee side, the server-side interceptors fire (sr, ss).
         */
        return EntityUtils.toString(execute.getEntity(), "utf-8");
    }
}

Application startup class

package com.kite.zipkin;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

@SpringBootApplication
public class Application {

    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }
}
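
The controllers call http://localhost:81, :82 and :83, so each copy of the demo application has to listen on its own port. One way to do that, shown as a minimal sketch (the original project may simply set server.port in application.properties instead), is to pass default properties programmatically:

package com.kite.zipkin;

import java.util.Properties;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

@SpringBootApplication
public class Application {

    public static void main(String[] args) {
        SpringApplication app = new SpringApplication(Application.class);
        // Hypothetical port assignment: 81 for zipkin-demo-server-2,
        // 82 for server-3, 83 for server-4, matching the controller URLs above.
        Properties defaults = new Properties();
        defaults.setProperty("server.port", "81");
        app.setDefaultProperties(defaults);
        app.run(args);
    }
}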

======================

zipkin-demo-server-2: change the serviceName

@Bean
public Brave brave(SpanCollector spanCollector) {
    Builder builder = new Builder("service2"); // serviceName
    builder.spanCollector(spanCollector);
    builder.traceSampler(Sampler.create(1)); // sampling rate
    return builder.build();
}

Change the controller

package com.kite.zipkin.controller;

import org.apache.http.client.methods.CloseableHttpResponse;
import org.apache.http.client.methods.HttpGet;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.util.EntityUtils;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class ZipkinBraveController {

    @Autowired
    private CloseableHttpClient httpClient;

    @GetMapping("/test")
    public String myboot() throws Exception {
        Thread.sleep(200); // 200 ms
        HttpGet get1 = new HttpGet("http://localhost:82/test");
        CloseableHttpResponse execute1 = httpClient.execute(get1);
        /*
         * 1. Around execute(), the client-side interceptors fire (cs, cr).
         * 2. Around the handling on the callee side, the server-side interceptors fire (sr, ss).
         */
        HttpGet get2 = new HttpGet("http://localhost:83/test");
        CloseableHttpResponse execute2 = httpClient.execute(get2);
        return EntityUtils.toString(execute1.getEntity(), "utf-8") + "-" + EntityUtils.toString(execute2.getEntity(), "utf-8");
    }
}

zipkin-demo-server-3: change the serviceName

@Bean
public Brave brave(SpanCollector spanCollector) {
    Builder builder = new Builder("service3"); // serviceName
    builder.spanCollector(spanCollector);
    builder.traceSampler(Sampler.create(1)); // sampling rate
    return builder.build();
}

Change the controller

package com.kite.zipkin.controller;

import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class ZipkinBraveController {

    @GetMapping("/test")
    public String myboot() throws Exception {
        Thread.sleep(100); // 100 ms
        return "service3";
    }
}

zipkin-demo-server-4: change the serviceName

@Bean
public Brave brave(SpanCollector spanCollector) {
    Builder builder = new Builder("service4"); // serviceName
    builder.spanCollector(spanCollector);
    builder.traceSampler(Sampler.create(1)); // sampling rate
    return builder.build();
}

Change the controller

package com.kite.zipkin.controller;

import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class ZipkinBraveController {

    @GetMapping("/test")
    public String myboot() throws Exception {
        Thread.sleep(100); // 100 ms
        return "service4";
    }
}

Points of interest

Getting the country via Google Maps

Requirement

Get the relevant emergency phone numbers for the user's country; the page is an H5 (mobile web) page.

Solution

The first idea was to use the Baidu Maps API, but judging from the official documentation it cannot meet the requirement: it does not return country-level data.

Baidu Maps API

Official JavaScript API

demo

<!DOCTYPE html>
<html>
<head>
    <meta name="viewport" content="initial-scale=1.0, user-scalable=no" />
    <meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
    <title>Hello, World</title>
    <style type="text/css">
        html {
            height: 100%
        }
        body {
            height: 100%;
            margin: 0px;
            padding: 0px
        }
        #container {
            height: 100%
        }
    </style>
    <script type="text/javascript" src="http://api.map.baidu.com/api?v=2.0&ak=8a9c8c9b61196a1b5be23217fc94a489">
        // v2.0 is referenced as:           src="http://api.map.baidu.com/api?v=2.0&ak=YOUR_KEY"
        // v1.4 and earlier are referenced: src="http://api.map.baidu.com/api?v=1.4&key=YOUR_KEY&callback=initialize"
    </script>
</head>
<body>
    <div id="container"></div>
    <script type="text/javascript">
        //var point = new BMap.Point(116.404, 39.915); // create a point
        //map.centerAndZoom(point, 15);                // initialize the map with a center point and zoom level
        var geolocation = new BMap.Geolocation();
        var boundary = new BMap.Boundary();
        boundary.get("上海", function(data) {
            console.info(data);
        })
        geolocation.getCurrentPosition(function (r) {
            if (this.getStatus() == BMAP_STATUS_SUCCESS) {
                console.info('Your position: ' + r.point.lng + ',' + r.point.lat);
                // create a geocoder instance
                var myGeo = new BMap.Geocoder();
                myGeo.getLocation(new BMap.Point(r.point.lng, r.point.lat), function (result) {
                    console.info(result);
                    if (result) {
                        console.info(result.address);
                    }
                });
            }
            else {
                alert('failed' + this.getStatus());
            }
        }, { enableHighAccuracy: true })
    </script>
</body>
</html>

image
At best this resolves down to the province level; it cannot return the country.

AMap (Gaode) Maps API

Official JavaScript API
It likewise cannot return the country; demo omitted.

HTML5 API & Google Maps API

HTML5 Geolocation is used to determine the user's position and is one of the main features of HTML5.

demo

<!DOCTYPE html>
<html>
<head>
    <meta charset="utf-8">
    <title>test</title>
</head>
<script type="text/javascript" src="jquery-1.10.1.min.js"></script>
<body>
<script>
if (navigator.geolocation) {
    // getCurrentPosition takes three arguments:
    // getSuccess - callback invoked on success
    // getError   - callback invoked on failure
    // getOptions - an object configuring getCurrentPosition
    // the last two are optional
    var getOptions = {
        // whether to use a high-accuracy source such as GPS; default is true
        enableHighAccuracy : true,
        // timeout in milliseconds, default 0
        timeout : 5000,
        // how long cached data may be reused, in milliseconds
        // default 0, i.e. always request fresh data
        // if set to Infinity, cached data is always used
        maximumAge : 0
    };
    // success callback
    function getSuccess(position) {
        alert(position);
        // on success, getCurrentPosition passes a position object to getSuccess
        // position has two properties, coords and timestamp
        // timestamp is the time the position data was created
        // coords is an object containing the geographic data
        console.info(position.timestamp);
        // estimated latitude
        console.info(position.coords.latitude);
        // estimated longitude
        console.info(position.coords.longitude);
        alert("Current position: " + position.coords.latitude + ","
                + position.coords.longitude);
        getCounty({'latitude':position.coords.latitude, 'longitude':position.coords.longitude});
        // estimated altitude (meters above sea level)
        console.info(position.coords.altitude);
        // accuracy of the latitude/longitude estimate, in meters
        console.info(position.coords.accuracy);
        // accuracy of the altitude estimate, in meters
        console.info(position.coords.altitudeAccuracy);
        // current heading of the device, in degrees clockwise from true north
        console.info(position.coords.heading);
        // current ground speed of the device, in meters per second
        console.info(position.coords.speed);
        // in addition, Firefox used to expose an extra address property
        if (position.address) {
            // address gives access to country, province and city
            console.info(position.address.country);
            console.info(position.address.province);
            console.info(position.address.city);
        }
    }
    // look up the country from the latitude/longitude
    function getCounty(data) {
        var url = "https://maps.googleapis.com/maps/api/geocode/json?latlng=" + data.latitude + "," + data.longitude + "&sensor=false&language=CN";
        $.post(url, function(data){
            console.info(data);
            if (data.status == 'OK') {
                var results = data.results;
                for (var i = 0; i < results[0].address_components.length; i++) {
                    for (var j = 0; j < results[0].address_components[i].types.length; j++) {
                        if (results[0].address_components[i].types[j] == "country") {
                            var country = results[0].address_components[i];
                            console.log(country.long_name)
                            alert(country.long_name)
                            console.log(country.short_name)
                            alert(country.short_name)
                        }
                    }
                }
            } else {
                alert('Geocoding failed')
            }
        });
    }
    // error callback
    function getError(error) {
        // the error callback receives an error object
        // error has a code property and three constants: TIMEOUT, PERMISSION_DENIED, POSITION_UNAVAILABLE
        // on failure, code equals one of the constants, indicating the cause
        alert(error);
        switch (error.code) {
            case error.TIMEOUT:
                alert("Timed out")
                console.info('timeout');
                break;
            case error.PERMISSION_DENIED:
                alert("User denied the geolocation request")
                console.info('permission denied');
                break;
            case error.POSITION_UNAVAILABLE:
                alert("Position unavailable");
                console.info('position unavailable');
                break;
            default:
                break;
        }
    }
    navigator.geolocation.getCurrentPosition(getSuccess, getError,
            getOptions);
    // watchPosition takes the same three arguments
    // it is used like getCurrentPosition but behaves differently:
    // getCurrentPosition runs only once,
    // watchPosition runs every time the device position changes
    var watcher_id = navigator.geolocation.watchPosition(getSuccess,
            getError, getOptions);
    // clearWatch stops watchPosition
    navigator.geolocation.clearWatch(watcher_id);
}
</script>
</body>
</html>

  • navigator.geolocation provides the coordinates, and the Google Maps geocoding API then returns the country (no Google Maps API key required). Note: inside mainland China this requires a proxy/VPN.
  • Location services must be enabled on the phone; if they are not, you can fall back to locating by IP range or via the Baidu API (coordinates returned by Baidu cannot be used directly and must be converted first, because the coordinate systems differ; see the sketch after this list).
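
For reference, a minimal sketch of the widely published approximation for converting Baidu's BD-09 coordinates to GCJ-02 (the constants are the commonly quoted ones, not taken from an official Baidu source, and the result is approximate):

public class Bd09ToGcj02 {

    // Commonly quoted constant used by the published BD-09 <-> GCJ-02 approximation
    private static final double X_PI = Math.PI * 3000.0 / 180.0;

    /** Returns {longitude, latitude} in GCJ-02 for the given BD-09 coordinates. */
    public static double[] convert(double bdLon, double bdLat) {
        double x = bdLon - 0.0065;
        double y = bdLat - 0.006;
        double z = Math.sqrt(x * x + y * y) - 0.00002 * Math.sin(y * X_PI);
        double theta = Math.atan2(y, x) - 0.000003 * Math.cos(x * X_PI);
        return new double[] { z * Math.cos(theta), z * Math.sin(theta) };
    }

    public static void main(String[] args) {
        // Hypothetical BD-09 coordinates somewhere in Shanghai
        double[] gcj = convert(121.506377, 31.245105);
        System.out.println("GCJ-02 lng=" + gcj[0] + ", lat=" + gcj[1]);
    }
}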

PS

  • I did not find support for this in Baidu or AMap and hope they add it (I may simply have missed it; if anyone knows how, please get in touch).
  • The code is a bit rough; since the front end was written directly in Vue, I did not need to package it up myself, which saved effort.

References

spring-loaded hot deployment

What is spring-loaded?

spring-loaded is a JVM agent that reloads class files changed while the application is running: it transforms classes at load time so that they can be reloaded later. Unlike plain "hot code replace", which only allows simple changes while the JVM is running (for example editing a method body), spring-loaded lets you add/modify/delete methods, fields and constructors. Annotations on types/methods/fields/constructors can also be changed, and values of enum types can be added/removed/modified.
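
To make the difference concrete, a small hypothetical class (not part of spring-loaded itself, just an illustration):

public class Greeter {

    // With plain JVM hot code replace you may only edit the body of an
    // existing method such as this one while debugging:
    public String hello() {
        return "hello";
    }

    // With spring-loaded you could additionally introduce a completely new
    // member at runtime, e.g. by uncommenting this method:
    // public String bye() {
    //     return "bye";
    // }
}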

What are the benefits?

  1. During development and testing: code can be changed and debugged on the fly after startup, with no restart, which cuts down the time spent on restart/debug cycles (note: in Eclipse, plain debug-time hot code replace can only update method bodies and cannot add new members).
  2. During online testing and release: when a problem shows up, the class file can be replaced directly without restarting the application (this also works for services delivered to other parties as a jar).

How do I use it?

Project URL

https://github.com/spring-projects/spring-loaded

Step 1: download the jar

http://repo.spring.io/release/org/springframework/springloaded/1.2.5.RELEASE/springloaded-1.2.5.RELEASE.jar

Step 2: configure the JVM startup arguments

eclipse

Eclipse: Run As --> Run Configurations --> Arguments --> VM arguments
-javaagent:E:\repository\org\springframework\spring-load\springloaded-1.2.5.RELEASE.jar
-noverify -Dspringloaded=verbose
Details:
-javaagent: registers the downloaded jar (use its local path) as a Java agent
-noverify: disables bytecode verification
-Dspringloaded=verbose: prints verbose spring-loaded output

image

Starting with the java command

java -javaagent:E:\repository\org\springframework\spring-load\springloaded-1.2.5.RELEASE.jar -noverify Test
The flags are the same as in the Eclipse setup; only the launch command differs.

Dynamically replacing classes inside a jar

1. Package the application as a runnable jar.
2. Start it with the following command:

java -javaagent:E:\repository\org\springframework\spring-load\springloaded-1.2.5.RELEASE.jar -noverify -Dspringloaded=watchJars=main.jar main.jar

/**
 * Test.java: implementation description (TODO)
 * @author Administrator 2016-07-04 16:55:59
 */
public class Test {

    public static void main(String[] args) throws InterruptedException {
        while (true) {
            try {
                println();
                Thread.sleep(1000);
            } catch (Throwable e) {
                e.printStackTrace();
            }
        }
    }

    public static void println() {
        System.out.println("112222221222222");
    }
}

Change it to:

/**
 * Test.java: implementation description (TODO)
 * @author Administrator 2016-07-04 16:55:59
 */
public class Test {

    public static void main(String[] args) throws InterruptedException {
        while (true) {
            try {
                println();
                Thread.sleep(1000);
            } catch (Throwable e) {
                e.printStackTrace();
            }
        }
    }

    public static void println() {
        System.out.println("test replace jar");
    }
}
3. Rebuild the jar and replace it in place while the application is running.
PS: in my tests this did not work on Windows; I do not have a Linux machine at hand, so it remains to be verified there.