A Deep Dive into Redis's LRU Eviction Policy
Source: jb51  Date: 2019/6/3 8:38:22

Preface

When Redis is used as a cache, some scenarios require attention to memory consumption. Redis deletes expired keys to free space, using two deletion strategies for expired keys:

  • Lazy deletion: every time a key is fetched from the keyspace, Redis checks whether it has expired; if it has, the key is deleted, otherwise it is returned.
  • Periodic deletion: at regular intervals, the server scans the database and deletes the expired keys it finds.
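The two deletion strategies above can be sketched with a toy cache. This is purely an illustration, not Redis's implementation; the class name `ExpiringCache` and all method names are made up:

```python
import time
import random

class ExpiringCache:
    """Toy cache illustrating lazy + periodic deletion (not Redis code)."""

    def __init__(self):
        self.data = {}     # key -> value
        self.expires = {}  # key -> absolute expiry timestamp

    def set(self, key, value, ttl=None):
        self.data[key] = value
        if ttl is not None:
            self.expires[key] = time.monotonic() + ttl

    def get(self, key):
        # Lazy deletion: check expiry on every access.
        exp = self.expires.get(key)
        if exp is not None and time.monotonic() >= exp:
            self.data.pop(key, None)
            self.expires.pop(key, None)
            return None
        return self.data.get(key)

    def periodic_sweep(self, sample_size=20):
        # Periodic deletion: check a random sample of keys that carry a TTL.
        keys = random.sample(list(self.expires),
                             min(sample_size, len(self.expires)))
        now = time.monotonic()
        for k in keys:
            if now >= self.expires[k]:
                self.data.pop(k, None)
                self.expires.pop(k, None)
```

Real Redis combines both: reads trigger lazy checks, while a periodic task (driven by `server.hz`) sweeps samples of the `expires` dictionary.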

In addition, Redis can enable LRU-based eviction to automatically discard some key-value pairs.

The LRU Algorithm

When data must be evicted from a cache, we would like to evict the data that will never be used again and keep the data that will still be accessed frequently; the fundamental problem is that a cache cannot predict the future. One workaround is the prediction LRU makes: data that has been accessed recently is more likely to be accessed again. Cache accesses typically follow a skewed distribution in which a small portion of the data receives the vast majority of accesses. When the access pattern rarely changes, we can record each item's last access time; the item with the smallest idle time is considered the most likely to be accessed next.

Consider the following access pattern: A is accessed every 5s, B every 2s, and C and D every 10s; | marks the cutoff point at which idle time is measured:

~~~~~A~~~~~A~~~~~A~~~~A~~~~~A~~~~~A~~|
~~B~~B~~B~~B~~B~~B~~B~~B~~B~~B~~B~~B~|
~~~~~~~~~~C~~~~~~~~~C~~~~~~~~~C~~~~~~|
~~~~~D~~~~~~~~~~D~~~~~~~~~D~~~~~~~~~D|

LRU works well for A, B and C, correctly predicting that the probability of future access is B > A > C, but it misjudges D: having been accessed just before the cutoff, D shows the smallest idle time of all, even though it is accessed only every 10s.

Even so, taken as a whole, LRU is an algorithm whose performance is good enough in practice.
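For reference, an exact LRU cache (the textbook version, which, as discussed later, Redis deliberately does not implement in full) can be sketched in a few lines of Python using an ordered dictionary; this is illustrative only:

```python
from collections import OrderedDict

class LRUCache:
    """Exact LRU: evicts the least recently used key when over capacity."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.items = OrderedDict()  # least recently used key comes first

    def get(self, key):
        if key not in self.items:
            return None
        self.items.move_to_end(key)  # mark as most recently used
        return self.items[key]

    def put(self, key, value):
        if key in self.items:
            self.items.move_to_end(key)
        self.items[key] = value
        if len(self.items) > self.capacity:
            self.items.popitem(last=False)  # evict least recently used
```

The ordered dictionary plays the role of the doubly linked list that a classic LRU implementation maintains: every access moves the key to the tail, so the head is always the eviction victim.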

LRU Configuration Parameters

Three Redis configuration options relate to LRU:

  • maxmemory: the memory limit for data stored in Redis, e.g. 100mb. When the cache's memory consumption exceeds this value, data eviction is triggered. Setting it to 0 means there is no limit on the amount of cached data, i.e. LRU does not take effect. The default is 0 on 64-bit systems, while 32-bit systems use an implicit 3GB limit.
  • maxmemory_policy: the eviction policy applied once the limit is hit.
  • maxmemory_samples: the precision of the random sampling, i.e. the number of keys picked at random. The larger the value, the closer the behavior gets to true LRU, but the higher the cost and the greater the impact on performance. The default sample size is 5.
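Note that in redis.conf these options are spelled with hyphens. A typical cache configuration might look like this (the values here are examples only):

```conf
maxmemory 100mb
maxmemory-policy allkeys-lru
maxmemory-samples 5
```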

Eviction Policies

The eviction policy, i.e. the value of maxmemory_policy, can be one of the following:

  • noeviction: if the cached data exceeds the maxmemory limit and the command the client is executing would allocate memory (most write commands, with DEL and a few others as exceptions), return an error to the client.
  • allkeys-lru: apply LRU eviction to all keys.
  • volatile-lru: apply LRU eviction only to keys with an expire set.
  • allkeys-random: evict random keys from the whole keyspace.
  • volatile-random: evict random keys among those with an expire set.
  • volatile-ttl: evict only keys with an expire set, preferring keys with a smaller TTL (Time To Live).

The volatile-lru, volatile-random and volatile-ttl policies do not operate on the full dataset, so they may fail to free enough memory. When there are no expired keys, or no keys with an expire set at all, these three policies behave much like noeviction.

General rules of thumb:

  • Use allkeys-lru when you expect requests to follow a power-law distribution (the 80/20 rule and the like), i.e. a subset of elements is accessed far more often than the rest.
  • Use allkeys-random when all keys are scanned in a continuous loop, or when you expect the access distribution to be uniform (every element is about equally likely to be accessed).
  • Use volatile-ttl when the TTLs of cached objects differ enough for expiry-based ranking to be meaningful.

The volatile-lru and volatile-random policies are useful when you want a single Redis instance to serve both as an evicting cache and as durable storage for a set of frequently used keys: keys without an expire are kept persistently, while keys with an expire participate in eviction. That said, running two separate instances is usually the better way to solve this problem.

Setting an expire on a key also costs memory, so allkeys-lru is more space-efficient: under that policy there is no need to set expires at all.

Approximate LRU

A true LRU algorithm needs a doubly linked list to track the order in which data was most recently accessed, but to save memory, Redis's LRU is not a full implementation. Rather than always selecting the single least recently accessed key, Redis runs an approximation of LRU: it samples a small number of keys and evicts the least recently accessed key among them. The algorithm's precision can be tuned via the per-eviction sample size, maxmemory-samples.

According to Redis's author, 24 bits could be squeezed out of each Redis object. That is not enough to store two pointers, but it is enough for a low-resolution timestamp, so each Redis object stores the unix time, in seconds, at which it was created or last updated; this is the LRU clock. A 24-bit counter of seconds takes 194 days to overflow, which is plenty, given how frequently cached data is updated.
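The 194-day figure comes straight from the width of the field: a 24-bit counter of seconds wraps after 2^24 seconds. A quick check:

```python
LRU_BITS = 24                      # width of the lru field in redisObject
max_clock = (1 << LRU_BITS) - 1    # largest representable tick value

seconds_until_wrap = 1 << LRU_BITS          # 16,777,216 seconds
days_until_wrap = seconds_until_wrap / (60 * 60 * 24)
print(round(days_until_wrap, 1))            # roughly 194 days
```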

Redis's keyspace lives in a hash table; picking the globally least recently accessed key would require an additional data structure to hold that metadata, which is clearly not worth the cost. Initially, Redis simply picked 3 keys at random and evicted the worst of them; the algorithm later moved to sampling N keys, with a default of 5.

Redis 3.0 improved the algorithm further by maintaining a pool of eviction candidates, holding 16 keys by default, sorted by idle time. On each update, N keys are sampled at random from the keyspace and their idle times computed; a key enters the pool only if the pool is not yet full or its idle time exceeds the smallest idle time in the pool. The key with the largest idle time in the pool is then evicted.

The true LRU algorithm and the approximate one can be compared with the following figure:

Light gray bands are objects that were evicted, gray bands are objects that were kept, and green bands are newly added objects. As the figure shows, with maxmemory-samples set to 5, Redis 3.0 performs better than Redis 2.8, and with a sample size of 10, Redis 3.0's approximate LRU comes very close to the theoretical ideal.

When the data access pattern is close to a power-law distribution, i.e. most accesses concentrate on a subset of keys, the LRU approximation handles it very well.

In simulation experiments, under a power-law access pattern, the true and approximate LRU algorithms turned out to be almost indistinguishable.

LRU Source Code Analysis

In Redis, both keys and values are redisObject instances:

    typedef struct redisObject {
        unsigned type:4;
        unsigned encoding:4;
        unsigned lru:LRU_BITS; /* LRU time (relative to global lru_clock) or
                                * LFU data (least significant 8 bits frequency
                                * and most significant 16 bits access time). */
        int refcount;
        void *ptr;
    } robj;

The 24-bit unsigned lru field (LRU_BITS is 24) records the object's LRU time.

Whenever a Redis command accesses cached data, it goes through lookupKey:

    robj *lookupKey(redisDb *db, robj *key, int flags) {
        dictEntry *de = dictFind(db->dict,key->ptr);
        if (de) {
            robj *val = dictGetVal(de);

            /* Update the access time for the ageing algorithm.
             * Don't do it if we have a saving child, as this will trigger
             * a copy on write madness. */
            if (server.rdb_child_pid == -1 &&
                server.aof_child_pid == -1 &&
                !(flags & LOOKUP_NOTOUCH))
            {
                if (server.maxmemory_policy & MAXMEMORY_FLAG_LFU) {
                    updateLFU(val);
                } else {
                    val->lru = LRU_CLOCK();
                }
            }
            return val;
        } else {
            return NULL;
        }
    }

When the policy is LRU (rather than LFU), this function updates the object's lru field to the current LRU_CLOCK() value:

    /* Return the LRU clock, based on the clock resolution. This is a time
     * in a reduced-bits format that can be used to set and check the
     * object->lru field of redisObject structures. */
    unsigned int getLRUClock(void) {
        return (mstime()/LRU_CLOCK_RESOLUTION) & LRU_CLOCK_MAX;
    }

    /* This function is used to obtain the current LRU clock.
     * If the current resolution is lower than the frequency we refresh the
     * LRU clock (as it should be in production servers) we return the
     * precomputed value, otherwise we need to resort to a system call. */
    unsigned int LRU_CLOCK(void) {
        unsigned int lruclock;
        if (1000/server.hz <= LRU_CLOCK_RESOLUTION) {
            atomicGet(server.lruclock,lruclock);
        } else {
            lruclock = getLRUClock();
        }
        return lruclock;
    }

LRU_CLOCK() depends on LRU_CLOCK_RESOLUTION (default 1000 ms), which defines the precision of the LRU algorithm, i.e. how long one LRU tick lasts. server.hz is the server's refresh frequency: when the server's clock update period is no coarser than the LRU resolution, LRU_CLOCK() simply returns the precomputed server clock, saving the cost of a system call.
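With the default resolution, the clock math reduces to "milliseconds divided by the resolution, masked to 24 bits". A Python transcription for illustration (names mirror the C code; this is a simulation, not the real server):

```python
LRU_CLOCK_RESOLUTION = 1000      # one LRU tick = 1000 ms by default
LRU_CLOCK_MAX = (1 << 24) - 1    # the lru field is 24 bits wide

def get_lru_clock(mstime):
    """Mirror of getLRUClock(): tick count truncated to 24 bits."""
    return (mstime // LRU_CLOCK_RESOLUTION) & LRU_CLOCK_MAX

# With hz = 10 the server clock is refreshed every 100 ms, which is finer
# than the 1000 ms resolution, so LRU_CLOCK() can use the cached value.
hz = 10
uses_cached_clock = (1000 // hz) <= LRU_CLOCK_RESOLUTION
```

The mask is also what causes the wraparound that estimateObjectIdleTime has to compensate for: once the tick count exceeds 24 bits, it starts over from zero.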

The entry point for command processing is processCommand:

    int processCommand(client *c) {

        /* Handle the maxmemory directive.
         *
         * Note that we do not want to reclaim memory if we are here re-entering
         * the event loop since there is a busy Lua script running in timeout
         * condition, to avoid mixing the propagation of scripts with the
         * propagation of DELs due to eviction. */
        if (server.maxmemory && !server.lua_timedout) {
            int out_of_memory = freeMemoryIfNeededAndSafe() == C_ERR;
            /* freeMemoryIfNeeded may flush slave output buffers. This may result
             * into a slave, that may be the active client, to be freed. */
            if (server.current_client == NULL) return C_ERR;

            /* It was impossible to free enough memory, and the command the client
             * is trying to execute is denied during OOM conditions or the client
             * is in MULTI/EXEC context? Error. */
            if (out_of_memory &&
                (c->cmd->flags & CMD_DENYOOM ||
                 (c->flags & CLIENT_MULTI && c->cmd->proc != execCommand))) {
                flagTransaction(c);
                addReply(c, shared.oomerr);
                return C_OK;
            }
        }
    }

Only the part that reclaims memory is shown above. freeMemoryIfNeededAndSafe wraps the function that actually frees memory:

    int freeMemoryIfNeeded(void) {
        /* By default replicas should ignore maxmemory
         * and just be masters exact copies. */
        if (server.masterhost && server.repl_slave_ignore_maxmemory) return C_OK;

        size_t mem_reported, mem_tofree, mem_freed;
        mstime_t latency, eviction_latency;
        long long delta;
        int slaves = listLength(server.slaves);

        /* When clients are paused the dataset should be static not just from the
         * POV of clients not being able to write, but also from the POV of
         * expires and evictions of keys not being performed. */
        if (clientsArePaused()) return C_OK;
        if (getMaxmemoryState(&mem_reported,NULL,&mem_tofree,NULL) == C_OK)
            return C_OK;

        mem_freed = 0;

        if (server.maxmemory_policy == MAXMEMORY_NO_EVICTION)
            goto cant_free; /* We need to free memory, but policy forbids. */

        latencyStartMonitor(latency);
        while (mem_freed < mem_tofree) {
            int j, k, i, keys_freed = 0;
            static unsigned int next_db = 0;
            sds bestkey = NULL;
            int bestdbid;
            redisDb *db;
            dict *dict;
            dictEntry *de;

            if (server.maxmemory_policy & (MAXMEMORY_FLAG_LRU|MAXMEMORY_FLAG_LFU) ||
                server.maxmemory_policy == MAXMEMORY_VOLATILE_TTL)
            {
                struct evictionPoolEntry *pool = EvictionPoolLRU;

                while(bestkey == NULL) {
                    unsigned long total_keys = 0, keys;

                    /* We don't want to make local-db choices when expiring keys,
                     * so to start populate the eviction pool sampling keys from
                     * every DB. */
                    for (i = 0; i < server.dbnum; i++) {
                        db = server.db+i;
                        dict = (server.maxmemory_policy & MAXMEMORY_FLAG_ALLKEYS) ?
                                db->dict : db->expires;
                        if ((keys = dictSize(dict)) != 0) {
                            evictionPoolPopulate(i, dict, db->dict, pool);
                            total_keys += keys;
                        }
                    }
                    if (!total_keys) break; /* No keys to evict. */

                    /* Go backward from best to worst element to evict. */
                    for (k = EVPOOL_SIZE-1; k >= 0; k--) {
                        if (pool[k].key == NULL) continue;
                        bestdbid = pool[k].dbid;

                        if (server.maxmemory_policy & MAXMEMORY_FLAG_ALLKEYS) {
                            de = dictFind(server.db[pool[k].dbid].dict,
                                pool[k].key);
                        } else {
                            de = dictFind(server.db[pool[k].dbid].expires,
                                pool[k].key);
                        }

                        /* Remove the entry from the pool. */
                        if (pool[k].key != pool[k].cached)
                            sdsfree(pool[k].key);
                        pool[k].key = NULL;
                        pool[k].idle = 0;

                        /* If the key exists, is our pick. Otherwise it is
                         * a ghost and we need to try the next element. */
                        if (de) {
                            bestkey = dictGetKey(de);
                            break;
                        } else {
                            /* Ghost... Iterate again. */
                        }
                    }
                }
            }

            /* volatile-random and allkeys-random policy */
            else if (server.maxmemory_policy == MAXMEMORY_ALLKEYS_RANDOM ||
                     server.maxmemory_policy == MAXMEMORY_VOLATILE_RANDOM)
            {
                /* When evicting a random key, we try to evict a key for
                 * each DB, so we use the static 'next_db' variable to
                 * incrementally visit all DBs. */
                for (i = 0; i < server.dbnum; i++) {
                    j = (++next_db) % server.dbnum;
                    db = server.db+j;
                    dict = (server.maxmemory_policy == MAXMEMORY_ALLKEYS_RANDOM) ?
                            db->dict : db->expires;
                    if (dictSize(dict) != 0) {
                        de = dictGetRandomKey(dict);
                        bestkey = dictGetKey(de);
                        bestdbid = j;
                        break;
                    }
                }
            }

            /* Finally remove the selected key. */
            if (bestkey) {
                db = server.db+bestdbid;
                robj *keyobj = createStringObject(bestkey,sdslen(bestkey));
                propagateExpire(db,keyobj,server.lazyfree_lazy_eviction);
                /* We compute the amount of memory freed by db*Delete() alone.
                 * It is possible that actually the memory needed to propagate
                 * the DEL in AOF and replication link is greater than the one
                 * we are freeing removing the key, but we can't account for
                 * that otherwise we would never exit the loop.
                 *
                 * AOF and Output buffer memory will be freed eventually so
                 * we only care about memory used by the key space. */
                delta = (long long) zmalloc_used_memory();
                latencyStartMonitor(eviction_latency);
                if (server.lazyfree_lazy_eviction)
                    dbAsyncDelete(db,keyobj);
                else
                    dbSyncDelete(db,keyobj);
                latencyEndMonitor(eviction_latency);
                latencyAddSampleIfNeeded("eviction-del",eviction_latency);
                latencyRemoveNestedEvent(latency,eviction_latency);
                delta -= (long long) zmalloc_used_memory();
                mem_freed += delta;
                server.stat_evictedkeys++;
                notifyKeyspaceEvent(NOTIFY_EVICTED, "evicted",
                    keyobj, db->id);
                decrRefCount(keyobj);
                keys_freed++;

                /* When the memory to free starts to be big enough, we may
                 * start spending so much time here that is impossible to
                 * deliver data to the slaves fast enough, so we force the
                 * transmission here inside the loop. */
                if (slaves) flushSlavesOutputBuffers();

                /* Normally our stop condition is the ability to release
                 * a fixed, pre-computed amount of memory. However when we
                 * are deleting objects in another thread, it's better to
                 * check, from time to time, if we already reached our target
                 * memory, since the "mem_freed" amount is computed only
                 * across the dbAsyncDelete() call, while the thread can
                 * release the memory all the time. */
                if (server.lazyfree_lazy_eviction && !(keys_freed % 16)) {
                    if (getMaxmemoryState(NULL,NULL,NULL,NULL) == C_OK) {
                        /* Let's satisfy our stop condition. */
                        mem_freed = mem_tofree;
                    }
                }
            }

            if (!keys_freed) {
                latencyEndMonitor(latency);
                latencyAddSampleIfNeeded("eviction-cycle",latency);
                goto cant_free; /* nothing to free... */
            }
        }
        latencyEndMonitor(latency);
        latencyAddSampleIfNeeded("eviction-cycle",latency);
        return C_OK;

    cant_free:
        /* We are here if we are not able to reclaim memory. There is only one
         * last thing we can try: check if the lazyfree thread has jobs in queue
         * and wait... */
        while(bioPendingJobsOfType(BIO_LAZY_FREE)) {
            if (((mem_reported - zmalloc_used_memory()) + mem_freed) >= mem_tofree)
                break;
            usleep(1000);
        }
        return C_ERR;
    }

    /* This is a wrapper for freeMemoryIfNeeded() that only really calls the
     * function if right now there are the conditions to do so safely:
     *
     * - There must be no script in timeout condition.
     * - Nor we are loading data right now.
     *
     */
    int freeMemoryIfNeededAndSafe(void) {
        if (server.lua_timedout || server.loading) return C_OK;
        return freeMemoryIfNeeded();
    }

All of the maxmemory_policy eviction strategies are implemented inside this function.

Under LRU, as the code shows, starting from database 0 (there are 16 by default), the policy selects either the redisDb's dict (all keys) or its expires (keys with an expire set) to refresh the candidate pool; the pool update logic is evictionPoolPopulate:

    void evictionPoolPopulate(int dbid, dict *sampledict, dict *keydict, struct evictionPoolEntry *pool) {
        int j, k, count;
        dictEntry *samples[server.maxmemory_samples];

        count = dictGetSomeKeys(sampledict,samples,server.maxmemory_samples);
        for (j = 0; j < count; j++) {
            unsigned long long idle;
            sds key;
            robj *o;
            dictEntry *de;

            de = samples[j];
            key = dictGetKey(de);

            /* If the dictionary we are sampling from is not the main
             * dictionary (but the expires one) we need to lookup the key
             * again in the key dictionary to obtain the value object. */
            if (server.maxmemory_policy != MAXMEMORY_VOLATILE_TTL) {
                if (sampledict != keydict) de = dictFind(keydict, key);
                o = dictGetVal(de);
            }

            /* Calculate the idle time according to the policy. This is called
             * idle just because the code initially handled LRU, but is in fact
             * just a score where an higher score means better candidate. */
            if (server.maxmemory_policy & MAXMEMORY_FLAG_LRU) {
                idle = estimateObjectIdleTime(o);
            } else if (server.maxmemory_policy & MAXMEMORY_FLAG_LFU) {
                /* When we use an LRU policy, we sort the keys by idle time
                 * so that we expire keys starting from greater idle time.
                 * However when the policy is an LFU one, we have a frequency
                 * estimation, and we want to evict keys with lower frequency
                 * first. So inside the pool we put objects using the inverted
                 * frequency subtracting the actual frequency to the maximum
                 * frequency of 255. */
                idle = 255-LFUDecrAndReturn(o);
            } else if (server.maxmemory_policy == MAXMEMORY_VOLATILE_TTL) {
                /* In this case the sooner the expire the better. */
                idle = ULLONG_MAX - (long)dictGetVal(de);
            } else {
                serverPanic("Unknown eviction policy in evictionPoolPopulate()");
            }

            /* Insert the element inside the pool.
             * First, find the first empty bucket or the first populated
             * bucket that has an idle time smaller than our idle time. */
            k = 0;
            while (k < EVPOOL_SIZE &&
                   pool[k].key &&
                   pool[k].idle < idle) k++;
            if (k == 0 && pool[EVPOOL_SIZE-1].key != NULL) {
                /* Can't insert if the element is < the worst element we have
                 * and there are no empty buckets. */
                continue;
            } else if (k < EVPOOL_SIZE && pool[k].key == NULL) {
                /* Inserting into empty position. No setup needed before insert. */
            } else {
                /* Inserting in the middle. Now k points to the first element
                 * greater than the element to insert. */
                if (pool[EVPOOL_SIZE-1].key == NULL) {
                    /* Free space on the right? Insert at k shifting
                     * all the elements from k to end to the right. */

                    /* Save SDS before overwriting. */
                    sds cached = pool[EVPOOL_SIZE-1].cached;
                    memmove(pool+k+1,pool+k,
                        sizeof(pool[0])*(EVPOOL_SIZE-k-1));
                    pool[k].cached = cached;
                } else {
                    /* No free space on right? Insert at k-1 */
                    k--;
                    /* Shift all elements on the left of k (included) to the
                     * left, so we discard the element with smaller idle time. */
                    sds cached = pool[0].cached; /* Save SDS before overwriting. */
                    if (pool[0].key != pool[0].cached) sdsfree(pool[0].key);
                    memmove(pool,pool+1,sizeof(pool[0])*k);
                    pool[k].cached = cached;
                }
            }

            /* Try to reuse the cached SDS string allocated in the pool entry,
             * because allocating and deallocating this object is costly
             * (according to the profiler, not my fantasy. Remember:
             * premature optimizbla bla bla bla. */
            int klen = sdslen(key);
            if (klen > EVPOOL_CACHED_SDS_SIZE) {
                pool[k].key = sdsdup(key);
            } else {
                memcpy(pool[k].cached,key,klen+1);
                sdssetlen(pool[k].cached,klen);
                pool[k].key = pool[k].cached;
            }
            pool[k].idle = idle;
            pool[k].dbid = dbid;
        }
    }

Redis samples maxmemory_samples keys at random, computes their idle times, and admits a key into the pool when it qualifies (its idle time is larger than that of some key already in the pool). After the pool has been updated, the key with the largest idle time in the pool is evicted.
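The pool mechanics can be simulated with a sorted list standing in for the C array. This is a sketch under simplifying assumptions (LRU policy only, duplicates skipped outright, Redis's default sizes); `populate_pool` and `pick_victim` are made-up names:

```python
import random

EVPOOL_SIZE = 16        # default candidate pool size in Redis
MAXMEMORY_SAMPLES = 5   # default sampling size

def populate_pool(pool, keyspace, idle_of):
    """Sample keys and insert them into the pool, kept sorted by idle time
    (ascending), loosely mirroring evictionPoolPopulate()."""
    sample = random.sample(list(keyspace),
                           min(MAXMEMORY_SAMPLES, len(keyspace)))
    for key in sample:
        idle = idle_of[key]
        if any(k == key for _, k in pool):
            continue  # toy simplification: key already pooled
        if len(pool) == EVPOOL_SIZE and idle <= pool[0][0]:
            continue  # full pool and not better than the worst entry
        pool.append((idle, key))
        pool.sort()                 # ascending idle time
        if len(pool) > EVPOOL_SIZE:
            pool.pop(0)             # drop the smallest idle time

def pick_victim(pool):
    """Evict the key with the largest idle time (the pool's last entry)."""
    return pool.pop()[1] if pool else None
```

Because the pool persists across eviction cycles, good candidates accumulate over time, which is the main reason Redis 3.0's approximation beats the stateless sampling of 2.8.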

estimateObjectIdleTime computes a Redis object's idle time:

    /* Given an object returns the min number of milliseconds the object was never
     * requested, using an approximated LRU algorithm. */
    unsigned long long estimateObjectIdleTime(robj *o) {
        unsigned long long lruclock = LRU_CLOCK();
        if (lruclock >= o->lru) {
            return (lruclock - o->lru) * LRU_CLOCK_RESOLUTION;
        } else {
            return (lruclock + (LRU_CLOCK_MAX - o->lru)) *
                    LRU_CLOCK_RESOLUTION;
        }
    }

The idle time is essentially the difference between the object's lru and the global LRU_CLOCK(), multiplied by the precision LRU_CLOCK_RESOLUTION to convert LRU ticks (seconds, by default) into milliseconds.
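The same computation, including the wraparound branch, transcribed into Python for illustration (units are LRU ticks converted to milliseconds):

```python
LRU_CLOCK_RESOLUTION = 1000      # ms per LRU tick (default)
LRU_CLOCK_MAX = (1 << 24) - 1    # the lru field is 24 bits

def estimate_idle_time(obj_lru, lruclock):
    """Mirror of estimateObjectIdleTime(): idle time in milliseconds."""
    if lruclock >= obj_lru:
        return (lruclock - obj_lru) * LRU_CLOCK_RESOLUTION
    # The global clock wrapped past 24 bits since the object was touched.
    return (lruclock + (LRU_CLOCK_MAX - obj_lru)) * LRU_CLOCK_RESOLUTION
```

The second branch handles the 194-day overflow discussed earlier: when the global clock is smaller than the object's timestamp, the clock must have wrapped, so the distance is measured around the 24-bit boundary.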

Summary

That's all for this article. I hope its content offers some useful reference for your study or work.
