
Ceph storage: resolving placement groups (PGs) stuck in unclean/degraded states

2015-01-07 09:25
Out of interest in Ceph we often build our own Ceph clusters, sometimes single-node and sometimes multi-node, and we frequently run into abnormal placement group (PG) states. Here are some of the situations we have encountered:

1. PGs are unclean or degraded on a single-node cluster

In this case, check how many OSDs you have, what the pool's replica count (size) is, what its minimum replica count (min_size) is, and whether the failure domain is osd.
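
A quick way to check all of these is sketched below; rbd is only an example pool name (one of the defaults of that era), substitute your own:

ceph osd tree                      # how many OSDs there are and which host each hangs off
ceph osd pool get rbd size         # replica count of the pool
ceph osd pool get rbd min_size     # minimum number of replicas needed to serve I/O
ceph osd dump | grep pool          # size, min_size and crush_ruleset for every pool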

2. PGs are unclean or degraded on a multi-node cluster

This is trickier. You need to read the logs and the OSD dump output, check whether each pool and the MDS are healthy and whether any objects have been lost; most importantly, work from the warning messages to resolve the problem.

You can use ceph -s, ceph -w, ceph health detail, ceph osd dump and similar commands to find the specific cause.
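
As a sketch, a first pass over an unhealthy cluster usually looks like this (the PG id 1.6c is only a placeholder, borrowed from the example later in this article):

ceph health detail                 # names the problem PGs and why they are unhealthy
ceph pg dump_stuck unclean         # list PGs stuck in the unclean state
ceph pg 1.6c query                 # inspect the peering/recovery details of one stuck PG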

3. When the problem is caused by the failure domain, there are two options:

3.1) Set osd crush chooseleaf type = 0 in the configuration file (the default is 1, i.e. host).

3.2) Decompile the CRUSH map, find the chooseleaf step, change its type to osd, then recompile and apply the map.
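
As an illustration of both options (a sketch, not a full configuration; the rule shown is assumed to be the default replicated rule, and the [global] section placement is an assumption):

# Option 3.1 - ceph.conf; affects the CRUSH map generated when the cluster is first created
[global]
osd crush chooseleaf type = 0      # 0 = osd, 1 = host (default)

# Option 3.2 - in the decompiled CRUSH map, change the failure domain of the rule
# before: step chooseleaf firstn 0 type host
# after:  step chooseleaf firstn 0 type osd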

4. The official summary of PG states is as follows:

http://ceph.com/docs/master/dev/placement-group/

Todo: a diagram of the states and how they can overlap.

creating: the PG is still being created.
active: requests to the PG will be processed.
clean: all objects in the PG are replicated the correct number of times.
down: a replica with necessary data is down, so the PG is offline.
replay: the PG is waiting for clients to replay operations after an OSD crashed.
splitting: the PG is being split into multiple PGs (not functional as of 2012-02).
scrubbing: the PG is being checked for inconsistencies.
degraded: some objects in the PG are not replicated enough times yet.
inconsistent: replicas of the PG are not consistent (e.g. objects are the wrong size, objects are missing from one replica after recovery finished, etc.).
peering: the PG is undergoing the peering process.
repair: the PG is being checked, and any inconsistencies found will be repaired (if possible).
recovering: objects are being migrated/synchronized with their replicas.
recovery_wait: the PG is waiting for the local/remote recovery reservations.
backfill: a special case of recovery, in which the entire contents of the PG are scanned and synchronized, instead of inferring what needs to be transferred from the PG's recent operation logs.
backfill_wait: the PG is waiting in line to start backfill.
backfill_toofull: a backfill reservation was rejected because the OSD is too full.
incomplete: the PG is missing a necessary period of history from its log. If you see this state, report a bug and try to start any failed OSDs that may contain the needed information.
stale: the PG is in an unknown state - the monitors have not received an update for it since the PG mapping changed.
remapped: the PG is temporarily mapped to a set of OSDs different from what CRUSH specified.

5. Some collected reference material on placement groups and CRUSH maps, excerpted from the Ceph documentation:

Placement Groups

A Placement Group (PG) aggregates a series of objects into a group, and maps the group to a series of OSDs. Tracking object placement and object metadata on a per-object basis is computationally expensive; i.e., a system with millions of objects cannot realistically track placement on a per-object basis. Placement groups address this barrier to performance and scalability. Additionally, placement groups reduce the number of processes and the amount of per-object metadata Ceph must track when storing and retrieving data.

Each placement group requires some amount of system resources:

Directly: each PG requires some amount of memory and CPU.
Indirectly: the total number of PGs increases the peering count.

Increasing the number of placement groups reduces the variance in per-OSD load across your cluster. We recommend approximately 50-100 placement groups per OSD to balance out memory and CPU requirements and per-OSD load. For a single pool of objects, you can use the following formula:

Total PGs = (OSDs * 100) / Replicas
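
A hypothetical worked example: a cluster with 9 OSDs and a replica count of 3 gives

(9 * 100) / 3 = 300

which is then commonly rounded up to the next power of two, so the pool would be created with pg_num = 512.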

When using multiple data pools for storing objects, you need to ensure that you balance the number of placement groups per pool with the number of placement groups per OSD, so that you arrive at a reasonable total number of placement groups that provides reasonably low variance per OSD without taxing system resources or making the peering process too slow.

3.3.8.1 SET THE NUMBER OF PLACEMENT GROUPS

To set the number of placement groups in a pool, you must specify the number of placement groups at the time you create the pool. See Create a Pool for details.
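
For example, creating a pool with the pg_num worked out above (the pool name is made up):

ceph osd pool create testpool 512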

3.3.8.2 GET THE NUMBER OF PLACEMENT GROUPS

To get the number of placement groups in a pool, execute the following:

ceph osd pool get {pool-name} pg_num
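
For example, for one of the default pools:

ceph osd pool get rbd pg_num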

3.3.8.3 GET A CLUSTER'S PG STATISTICS

To get the statistics for the placement groups in your cluster, execute the following:

ceph pg dump [--format {format}]

Valid formats are plain (default) and json.

3.3.8.4 GET STATISTICS FOR STUCK PGS

To get the statistics for all placement groups stuck in a specified state, execute the following:

ceph pg dump_stuck inactive|unclean|stale [--format <format>] [-t|--threshold <seconds>]

Inactive placement groups cannot process reads or writes because they are waiting for an OSD with the most up-to-date data to come up and in.

Unclean placement groups contain objects that are not replicated the desired number of times. They should be recovering.

Stale placement groups are in an unknown state - the OSDs that host them have not reported to the monitor cluster in a while (configured by mon_osd_report_timeout).


Valid formats are plain (default) and json. The threshold defines the minimum number of seconds the placement group must be stuck before it is included in the returned statistics (default 300 seconds).
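
For example, to list only the PGs that have been stuck unclean for at least ten minutes, in JSON:

ceph pg dump_stuck unclean --format json --threshold 600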


3.3.8.5 GET A PG MAP

To get the placement group map for a particular placement group, execute the following:

ceph pg map {pg-id}

For example:

ceph pg map 1.6c

Ceph will return the placement group map, the placement group, and the OSD status:

osdmap e13 pg 1.6c (1.6c) -> up [1,0] acting [1,0]

3.3.8.6 GET A PG'S STATISTICS

To retrieve statistics for a particular placement group, execute the following:

ceph pg {pg-id} query

3.3.8.7 SCRUB A PLACEMENT GROUP

To scrub a placement group, execute the following:

ceph pg scrub {pg-id}

Ceph checks the primary and any replica nodes, generates a catalog of all objects in the placement group, and compares them to ensure that no objects are missing or mismatched and that their contents are consistent. Assuming the replicas all match, a final semantic sweep ensures that all of the snapshot-related object metadata is consistent. Errors are reported via logs.
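
For example, scrubbing the PG from the earlier example and watching the cluster log for the result:

ceph pg scrub 1.6c
ceph -w        # scrub results and any inconsistencies appear in the cluster log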

3.3.8.8 REVERT LOST

If the cluster has lost one or more objects, and you have decided to abandon the search for the lost data, you must mark the unfound objects as lost.

If all possible locations have been queried and objects are still lost, you may have to give up on the lost objects. This is possible given unusual combinations of failures that allow the cluster to learn about writes that were performed before the writes themselves are recovered.

Currently the only supported option is "revert", which will either roll back to a previous version of the object or (if it was a new object) forget about it entirely. To mark the "unfound" objects as "lost", execute the following:

ceph pg {pg-id} mark_unfound_lost revert

Important: Use this feature with caution, because it may confuse applications that expect the object(s) to exist.
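
In practice you would first confirm what is actually unfound before reverting anything; a sketch, using a made-up PG id (list_missing is the subcommand for listing unfound objects, although it is not covered by the excerpt above):

ceph health detail                    # reports PGs with unfound objects
ceph pg 2.5 list_missing              # show the unfound objects in that PG
ceph pg 2.5 mark_unfound_lost revert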

3.3.9 CRUSH MAPS

The CRUSH algorithm determines how to store and retrieve data by computing data storage locations. CRUSH empowers Ceph clients to communicate with OSDs directly rather than through a centralized server or broker. With an algorithmically determined method of storing and retrieving data, Ceph avoids a single point of failure, a performance bottleneck, and a physical limit to its scalability.

CRUSH requires a map of your cluster, and uses the CRUSH map to pseudo-randomly store and retrieve data in OSDs with a uniform distribution of data across the cluster. For a detailed discussion of CRUSH, see CRUSH - Controlled, Scalable, Decentralized Placement of Replicated Data.

CRUSH maps contain a list of OSDs, a list of 'buckets' for aggregating the devices into physical locations, and a list of rules that tell CRUSH how it should replicate data in a Ceph cluster's pools. By reflecting the underlying physical organization of the installation, CRUSH can model, and thereby address, potential sources of correlated device failures. Typical sources include physical proximity, a shared power source, and a shared network. By encoding this information into the cluster map, CRUSH placement policies can separate object replicas across different failure domains while still maintaining the desired distribution. For example, to address the possibility of concurrent failures, it may be desirable to ensure that data replicas are on devices in different shelves, racks, power supplies, controllers, and/or physical locations.

When you create a configuration file and deploy Ceph with mkcephfs, Ceph generates a default CRUSH map for your configuration. The default CRUSH map is fine for a Ceph sandbox environment. However, when you deploy a large-scale data cluster, you should give significant consideration to developing a custom CRUSH map, because it will help you manage your Ceph cluster, improve performance and ensure data safety.

For example, if an OSD goes down, a CRUSH map can help you locate the physical data center, room, row and rack of the host with the failed OSD in the event you need to use onsite support or replace hardware.

Similarly, CRUSH may help you identify faults more quickly. For example, if all OSDs in a particular rack go down simultaneously, the fault may lie with a network switch or with power to the rack rather than with the OSDs themselves.

A custom CRUSH map can also help you identify the physical locations where Ceph stores redundant copies of data when the placement group(s) associated with a failed host are in a degraded state.

Inktank provides excellent premium support for developing CRUSH maps.


3.3.9.1 EDITING A CRUSH MAP

To edit an existing CRUSH map:


Get the CRUSH map.
Decompile the CRUSH map.
Edit at least one of Devices, Buckets and Rules.
Recompile the CRUSH map.
Set the CRUSH map.
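
Put together, the whole edit cycle looks like this (the file names are arbitrary placeholders; each command is detailed in the subsections below):

ceph osd getcrushmap -o crushmap.bin
crushtool -d crushmap.bin -o crushmap.txt
# edit crushmap.txt (devices, buckets and/or rules) in any text editor
crushtool -c crushmap.txt -o crushmap.new
ceph osd setcrushmap -i crushmap.new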

To activate CRUSH map rules for a specific pool, identify the common ruleset number for those rules and specify that ruleset number for the pool. See Set Pool Values for details.

3.3.9.1.1 GET A CRUSH MAP

To get the CRUSH map for your cluster, execute the following:

ceph osd getcrushmap -o {compiled-crushmap-filename}

Ceph will output (-o) a compiled CRUSH map to the filename you specified. Since the CRUSH map is in a compiled form, you must decompile it before you can edit it.

3.3.9.1.2 DECOMPILE A CRUSH MAP

To decompile a CRUSH map, execute the following:

crushtool -d {compiled-crushmap-filename} -o {decompiled-crushmap-filename}

Ceph will decompile (-d) the compiled CRUSH map and output (-o) it to the filename you specified.

3.3.9.1.3 COMPILE A CRUSH MAP

To compile a CRUSH map, execute the following:

crushtool -c {decompiled-crush-map-filename} -o {compiled-crush-map-filename}

Ceph will store a compiled CRUSH map to the filename you specified.

3.3.9.1.4 SET A CRUSH MAP

To set the CRUSH map for your cluster, execute the following:

ceph osd setcrushmap -i {compiled-crushmap-filename}

Ceph will input the compiled CRUSH map of the filename you specified as the CRUSH map for the cluster.

3.3.9.2 CRUSH MAP PARAMETERS

There are three main sections to a CRUSH map.

Devices consist of any object storage device; i.e., the hard disk corresponding to a ceph-osd daemon.
Buckets consist of a hierarchical aggregation of storage locations (e.g., rows, racks, hosts, etc.) and their assigned weights.
Rules consist of the manner of selecting buckets.

3.3.9.2.1 CRUSH MAP DEVICES

To map placement groups to OSDs, a CRUSH map requires a list of OSD devices (i.e., the name of the OSD daemon). The list of devices appears first in the CRUSH map.

#devices

device {num} {osd.name}

For example:

#devices

device 0 osd.0

device 1 osd.1

device 2 osd.2

device 3 osd.3

As a general rule, an OSD daemon maps to a single disk or to a RAID.

3.3.9.2.2 CRUSH MAP BUCKETS

CRUSH maps support the notion of 'buckets', which may be thought of as nodes that aggregate other buckets into a hierarchy of physical locations, where OSD devices are the leaves of the hierarchy. The following table lists the default types.

Type | Location    | Description
0    | OSD         | An OSD daemon (e.g., osd.1, osd.2, etc).
1    | Host        | A host name containing one or more OSDs.
2    | Rack        | A computer rack. The default is unknownrack.
3    | Row         | A row in a series of racks.
4    | Room        | A room containing racks and rows of hosts.
5    | Data Center | A physical data center containing rooms.
6    | Pool        | A data storage pool for storing objects.

Tip: You can remove these types and create your own bucket types.

Ceph's deployment tools generate a CRUSH map that contains a bucket for each host, and a pool named "default," which is useful for the default data, metadata and rbd pools. The remaining bucket types provide a means for storing information about the physical location of nodes/buckets, which makes cluster administration much easier when OSDs, hosts, or network hardware malfunction and the administrator needs access to physical hardware.

A bucket has a type, a unique name (string), a unique ID expressed as a negative integer, a weight relative to the total capacity/capability of its item(s), the bucket algorithm (straw by default), and the hash (0 by default, reflecting CRUSH hash rjenkins1). A bucket may have one or more items. The items may consist of other buckets or OSDs. Items may have a weight that reflects the relative weight of the item.

[bucket-type] [bucket-name] {
        id [a unique negative numeric ID]
        weight [the relative capacity/capability of the item(s)]
        alg [the bucket type: uniform | list | tree | straw ]
        hash [the hash type: 0 by default]
        item [item-name] weight [weight]
}

The following example illustrates how you can use buckets to aggregate a pool and physical locations like a datacenter, a room, a rack and a row.

host ceph-osd-server-1 {
        id -17
        alg straw
        hash 0
        item osd.0 weight 1.00
        item osd.1 weight 1.00
}

row rack-1-row-1 {
        id -16
        alg straw
        hash 0
        item ceph-osd-server-1 weight 2.00
}

rack rack-3 {
        id -15
        alg straw
        hash 0
        item rack-3-row-1 weight 2.00
        item rack-3-row-2 weight 2.00
        item rack-3-row-3 weight 2.00
        item rack-3-row-4 weight 2.00
        item rack-3-row-5 weight 2.00
}

rack rack-2 {
        id -14
        alg straw
        hash 0
        item rack-2-row-1 weight 2.00
        item rack-2-row-2 weight 2.00
        item rack-2-row-3 weight 2.00
        item rack-2-row-4 weight 2.00
        item rack-2-row-5 weight 2.00
}

rack rack-1 {
        id -13
        alg straw
        hash 0
        item rack-1-row-1 weight 2.00
        item rack-1-row-2 weight 2.00
        item rack-1-row-3 weight 2.00
        item rack-1-row-4 weight 2.00
        item rack-1-row-5 weight 2.00
}

room server-room-1 {
        id -12
        alg straw
        hash 0
        item rack-1 weight 10.00
        item rack-2 weight 10.00
        item rack-3 weight 10.00
}

datacenter dc-1 {
        id -11
        alg straw
        hash 0
        item server-room-1 weight 30.00
        item server-room-2 weight 30.00
}

pool data {
        id -10
        alg straw
        hash 0
        item dc-1 weight 60.00
        item dc-2 weight 60.00
}

3.3.9.2.3 CRUSH MAP RULES

CRUSH maps support the notion of 'CRUSH rules', which are the rules that determine data placement for a pool. For large clusters, you will likely create many pools, where each pool may have its own CRUSH ruleset and rules. The default CRUSH map has a rule for each pool, and one ruleset assigned to each of the default pools, which include:

•data

•metadata

•rbd

Note: In most cases, you will not need to modify the default rules. When you create a new pool, its default ruleset is 0.

A rule takes the following form:

rule [rulename] {
        ruleset [ruleset]
        type [type]
        min_size [min-size]
        max_size [max-size]
        step [step]
}

ruleset
Description: A means of classifying a rule as belonging to a set of rules. Activated by setting the ruleset in a pool.
Purpose: A component of the rule mask.
Type: Integer
Required: Yes
Default: 0

type
Description: Describes a rule for either a hard disk (replicated) or a RAID.
Purpose: A component of the rule mask.
Type: String
Required: Yes
Default: replicated
Valid Values: Currently only replicated

min_size
Description: If a placement group makes fewer replicas than this number, CRUSH will NOT select this rule.
Purpose: A component of the rule mask.
Type: Integer
Required: Yes
Default: 1

max_size
Description: If a placement group makes more replicas than this number, CRUSH will NOT select this rule.
Purpose: A component of the rule mask.
Type: Integer
Required: Yes
Default: 10

step take {bucket}
Description: Takes a bucket name, and begins iterating down the tree.
Purpose: A component of the rule.
Required: Yes
Example: step take data

step choose firstn {num} type {bucket-type}
Description: Selects the number of buckets of the given type. Where N is the number of options available: if {num} > 0 && < N, choose that many buckets; if {num} < 0, it means N - {num}; and if {num} == 0, choose N buckets (all available).
Purpose: A component of the rule.
Prerequisite: Follows step take or step choose.
Example: step choose firstn 1 type row