Several solutions for Elasticsearch queries beyond the 10,000-result window

2024-04-22 · 传达室马大爷

Elasticsearch exception [type=illegal_argument_exception, reason=Result window is too large, from + size must be less than or equal to: [10000] but was [10003]. See the scroll api for a more efficient way to request large data sets. This limit can be set by changing the [index.max_result_window] index level setting.]

Solutions:

1) Raise index.max_result_window

# Raise the result window for the index, e.g. to 1,000,000
PUT index_name/_settings
{
  "index.max_result_window": "1000000"
}

Note that every shard still has to build and sort from + size hits in memory for each deep query, so raising the window only defers the problem and increases heap pressure.
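To confirm the change took effect, you can read the setting back (index_name is a placeholder):

```
GET index_name/_settings/index.max_result_window
```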

2) Cursor-style query with search_after

/**
 * Cursor-style paged query
 * @param indexName index name
 * @param condition query condition
 * @param clazz element type of the returned data
 * @param searchAfterSorts comma-joined sort values of the last hit of the previous page (blank for the first page)
 * @return one page of data plus the cursor for the next page
 */
WrapRowCount<T> getPagerDataList(String indexName,
                                 EsBaseQueryCondition condition,
                                 Class<T> clazz,
                                 String searchAfterSorts);

public class WrapRowCount<T> {

    /** one page of results */
    private List<T> result;

    /** total number of hits */
    private int rowCount;

    /** cursor: comma-joined sort values of the last hit on this page, pass into the next call */
    private String searchAfterSorts;
}
@Override
public WrapRowCount<T> getPagerDataList(String indexName,
                                        EsBaseQueryCondition queryCondition,
                                        Class<T> clazz,
                                        String searchAfterSorts) {
    EsBaseQueryCondition condition = queryCondition.clone();
    try {
        SearchSourceBuilder sourceBuilder = new SearchSourceBuilder();
        sourceBuilder.trackTotalHits(true)
                .fetchSource(condition.getOutPutFields(), null);
        if (StringUtils.isNotBlank(searchAfterSorts)) {
            // String.split already returns a String[]; use the previous page's sort values as the cursor
            Object[] objects = searchAfterSorts.split(",");
            sourceBuilder.searchAfter(objects);
        }
        int pageSize = condition.getPageSize();
        if (condition.isQueryAllSeries()) {
            pageSize = MAX_COUNT;
        }
        sourceBuilder.from(0).size(pageSize);
        paddingFilterCondition(sourceBuilder, condition);
        paddingSort(sourceBuilder, condition, false);
        SearchRequest rq = new SearchRequest();
        // target index
        rq.indices(indexName);
        rq.source(sourceBuilder);
        log.info("ES query DSL: " + sourceBuilder.toString());
        return getPagerDataWrapRowCount(clazz, rq);
    } catch (Exception e) {
        log.error("ES query failed, condition:{}", JsonHelper.serialize(condition), e);
        throw new EsQueryException(e);
    }
}
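For reference, the request body that getPagerDataList assembles is roughly equivalent to the following (field names, page size, and cursor values are illustrative placeholders, and the query clause built from the condition is omitted):

```
GET index_name/_search
{
  "track_total_hits": true,
  "from": 0,
  "size": 500,
  "_source": ["field1", "field2"],
  "sort": [{ "id": "asc" }],
  "search_after": ["10234"]
}
```

Each response's last hit supplies the sort values for the next request's search_after, so from always stays 0 and the 10,000-window limit never applies.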

private WrapRowCount<T> getPagerDataWrapRowCount(Class<T> clazz,
                                                 SearchRequest rq) throws IOException {
    long startTime = System.currentTimeMillis();
    SearchResponse rp = esRestClientContainer().fetchHighLevelClient().search(rq, RequestOptions.DEFAULT);
    log.info("ES query finished, took {} ms, result: {}", (System.currentTimeMillis() - startTime), rp.toString());
    List<T> dataList = new ArrayList<>();
    WrapRowCount<T> wrapRowCount = new WrapRowCount<>();
    for (SearchHit hit : rp.getHits().getHits()) {
        T data = JsonHelper.deSerialize(hit.getSourceAsString(), clazz);
        dataList.add(data);
        Object[] sortValues = hit.getSortValues();
        if (ArrayUtils.isNotEmpty(sortValues)) {
            wrapRowCount.setSearchAfterSorts(Joiner.on(",").join(sortValues));
        }
    }
    int totalCount = (int) rp.getHits().getTotalHits().value;
    wrapRowCount.setRowCount(totalCount);
    wrapRowCount.setResult(dataList);
    return wrapRowCount;
}

Note: once results are grouped (e.g. collapsed or aggregated), the search_after cursor approach above can no longer be used for paging.

3) Bookmark-based queries

Anyone who has worked with a relational database will recognize the idea. Suppose we have a user table whose id column is an auto-increment primary key; consider a SQL statement like:

select id, name from user limit 100000, 10;

Deep paging like this forces the database to scan and discard the first 100,000 rows, so the SQL is usually optimized to:

select id, name from user where id > 100000 limit 10;

A range lookup that walks the index performs far better than the scan-and-discard approach.
Borrowing this idea for the ES index above, we can sort on a designated field and carry its last value forward as the bookmark for each subsequent page.
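The bookmark loop can be sketched with plain in-memory data standing in for the index; KeysetPager and its method names are illustrative, not part of the code below:

```java
import java.util.ArrayList;
import java.util.List;

public class KeysetPager {

    /**
     * Return up to pageSize elements of the ascending-sorted list that are
     * strictly greater than the bookmark, mirroring "where id > ? limit ?".
     */
    public static List<Integer> page(List<Integer> sortedIds, int bookmark, int pageSize) {
        List<Integer> page = new ArrayList<>();
        for (int id : sortedIds) {
            if (id > bookmark) {
                page.add(id);
                if (page.size() == pageSize) {
                    break;
                }
            }
        }
        return page;
    }

    public static void main(String[] args) {
        List<Integer> ids = new ArrayList<>();
        for (int i = 1; i <= 10; i++) {
            ids.add(i);
        }
        int bookmark = 0;
        List<Integer> page;
        while (!(page = page(ids, bookmark, 3)).isEmpty()) {
            System.out.println(page);
            // the last id of this page becomes the bookmark for the next one
            bookmark = page.get(page.size() - 1);
        }
    }
}
```

Each page's last id becomes the bookmark for the next call, just as the last hit's sort value is carried forward in the ES query below.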

public <T, IF, GF> void syncData(String esIndexName,
                                 List<T> entityList,
                                 Function<T, IF> idFunction,
                                 String idFieldName,
                                 Function<T, GF> groupFunction,
                                 String groupFieldName) {
    Set<GF> allGroupIds = new HashSet<>();
    Map<GF, List<T>> entityMap = entityList.stream().collect(Collectors.groupingBy(groupFunction));
    allGroupIds.addAll(entityMap.keySet());
    // query the full list of group ids already in the index, paging with a bookmark on the id field
    EsBaseQueryCondition condition = new EsBaseQueryCondition();
    condition.setPageSize(500);
    condition.setBookmarkFieldName(idFieldName);
    condition.setBookmarkValue(0);
    EsSortCondition esSortCondition = new EsSortCondition(idFieldName, SortOrder.ASC, EsSortTypeEnum.NUMBER_SORT);
    condition.setSortConditions(Collections.singletonList(esSortCondition));
    List<GF> groupIds = defaultEsBaseQueryDao.getSingleValuePagerGroupByField(esIndexName, groupFieldName, condition);
    allGroupIds.addAll(groupIds);
    esOperationStrategyService.syncData(esIndexName, idFunction, idFieldName, groupFieldName, entityMap, allGroupIds);
}

@Override
public <K> List<K> getSingleValuePagerGroupByField(String indexName, String fieldName, EsBaseQueryCondition queryCondition) {
    EsBaseQueryCondition condition = queryCondition.clone();
    try {
        int totalNum = getGroupResultCount(indexName, fieldName, condition);
        int pageSize = MathUtil.middle(100, MAX_COUNT, condition.getPageSize());
        int pageNum = PageUtil.totalPage(totalNum, pageSize);
        List<K> dataList = new ArrayList<>();
        for (int pageIndex = 1; pageIndex <= pageNum; ++pageIndex) {
            // with a bookmark, the range filter already skips earlier pages, so
            // "from" must stay 0; otherwise fall back to plain from/size paging
            int start = StringUtils.isNotBlank(condition.getBookmarkFieldName())
                    ? 0 : (pageIndex - 1) * pageSize;
            SearchSourceBuilder searchSourceBuilder = new SearchSourceBuilder()
                    .from(start)
                    .size(pageSize)
                    .fetchSource(new String[]{fieldName}, null)
                    .collapse(new CollapseBuilder(fieldName));
            SearchResponse rp = getSearchResponse(indexName, condition, searchSourceBuilder);
            List<K> subDatList = Stream.of(rp.getHits().getHits())
                    .map(hit -> (K) hit.getSourceAsMap().get(fieldName))
                    .filter(Objects::nonNull)
                    .collect(Collectors.toList());
            if (CollectionUtils.isNotEmpty(subDatList) && StringUtils.isNotBlank(condition.getBookmarkFieldName())) {
                K lastData = subDatList.get(subDatList.size() - 1);
                condition.setBookmarkValue(lastData);
            }
            dataList.addAll(subDatList);
        }
        return dataList;
    } catch (Exception e) {
        log.error("ES query failed: condition:{}", JsonHelper.serialize(queryCondition), e);
        throw new EsQueryException(e);
    }
}
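Under these conditions, each pass of getSingleValuePagerGroupByField sends a request shaped roughly like this (groupField and bookmarkField are placeholders, and the exact query clause depends on the condition):

```
GET index_name/_search
{
  "from": 0,
  "size": 500,
  "_source": ["groupField"],
  "collapse": { "field": "groupField" },
  "query": {
    "bool": {
      "filter": [
        { "range": { "bookmarkField": { "gt": 0 } } }
      ]
    }
  },
  "sort": [{ "bookmarkField": "asc" }]
}
```

The collapse clause deduplicates hits per group value, and the range filter advances past the previous page's bookmark.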


private BoolQueryBuilder getBooleanQueryBuilder(EsBaseQueryCondition condition) {
    Map<String, Object> jsonMap = JSONUtil.parseObj(condition);
    BoolQueryBuilder queryBuilder = condition.createConditionQueryBuilder();
    Field[] fields = ReflectUtil.getFields(condition.getClass());
    for (Field field : fields) {
        paddingCondition(queryBuilder, field, jsonMap);
    }

    // add an exclusive lower bound when a bookmark is present: bookmarkFieldName > bookmarkValue
    String bookmarkFieldName = condition.getBookmarkFieldName();
    Object bookmarkValue = condition.getBookmarkValue();
    boolean isCondition = StringUtils.isNotBlank(bookmarkFieldName) && bookmarkValue != null;
    EsUtils.paddingRangeFromExcludeFrom(queryBuilder, isCondition, bookmarkFieldName, bookmarkValue);

    return queryBuilder;
}