List Deduplication
2021-11-01 少年眼蓝不及海
Deduplicating and overwriting a List<Bean> with Java 8
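The snippets below assume the usual java.util / java.util.stream imports, a CollectionUtils.isEmpty helper (e.g. from Spring or Apache Commons Collections), and a simple Person bean with a personId field. The Person class is not shown in the original post, so the sketch here is only a hypothetical stand-in:

// Hypothetical Person bean used by the examples below (not part of the original post).
public class Person {
    private final Long personId;
    private final String name;

    public Person(Long personId, String name) {
        this.personId = personId;
        this.name = name;
    }

    public Long getPersonId() {
        return personId;
    }

    @Override
    public String toString() {
        return "Person{" + personId + ", " + name + "}";
    }
}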
If you do not care which record wins and just want to keep a single record per personId:
public static List<Person> coverDuplicate(List<Person> sourceList) {
    if (CollectionUtils.isEmpty(sourceList)) {
        return new ArrayList<>();
    }
    // Collect into a TreeSet keyed on personId so duplicates are dropped,
    // then copy the result back into an ArrayList.
    return sourceList.stream().collect(
            Collectors.collectingAndThen(
                    Collectors.toCollection(
                            () -> new TreeSet<>(Comparator.comparing(Person::getPersonId))),
                    ArrayList::new));
}
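A quick usage sketch with the hypothetical Person bean above. Because the deduplication goes through a TreeSet ordered by personId, the result is sorted by personId; which of two duplicates survives is not specified by the requirement (for a sequential stream it is the first one encountered):

List<Person> people = Arrays.asList(
        new Person(2L, "Bob"),
        new Person(1L, "Alice"),
        new Person(2L, "Bobby"));   // duplicate personId = 2

System.out.println(coverDuplicate(people));
// [Person{1, Alice}, Person{2, Bob}] -- one record per personId, sorted by personId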
If records share the same personId and the later record should overwrite the earlier one:
public static List<Person> coverDuplicate1(List<Person> sourceList) {
    if (CollectionUtils.isEmpty(sourceList)) {
        return new ArrayList<>();
    }
    // toMap keyed on personId; the merge function (e1, e2) -> e2 keeps the later
    // record whenever two entries share the same key.
    return sourceList.stream()
            .collect(Collectors.toMap(Person::getPersonId, Function.identity(), (e1, e2) -> e2))
            .values().stream()
            .collect(Collectors.toList());
}
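Note that Collectors.toMap uses a HashMap by default, so the order of the result is not guaranteed to match the source list. If the original encounter order matters, the four-argument toMap overload lets you supply a LinkedHashMap. A sketch of that variant (the method name here is made up for illustration):

// Variant that keeps keys in first-encounter order while letting later records win.
public static List<Person> coverDuplicateKeepOrder(List<Person> sourceList) {
    if (CollectionUtils.isEmpty(sourceList)) {
        return new ArrayList<>();
    }
    return new ArrayList<>(sourceList.stream()
            .collect(Collectors.toMap(
                    Person::getPersonId,
                    Function.identity(),
                    (e1, e2) -> e2,
                    LinkedHashMap::new))   // LinkedHashMap preserves encounter order
            .values());
}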
Deduplicating a List<String>
1. Java 8 distinct()
list.stream().distinct().collect(Collectors.toList());
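distinct() deduplicates using equals()/hashCode() and, for an ordered stream, keeps the first occurrence in encounter order. A minimal example:

List<String> list = Arrays.asList("a", "b", "a", "c", "b");
List<String> distinct = list.stream().distinct().collect(Collectors.toList());
System.out.println(distinct);   // [a, b, c]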
2. Deduplicate with a Set; because a HashSet is unordered, the original order is not preserved
/**
 * Remove duplicate entries (order is not preserved because HashSet is unordered).
 * @param list the source list, may be null or empty
 */
public static List<String> listDistinct(List<String> list) {
    List<String> result = new ArrayList<>();
    if (list != null && !list.isEmpty()) {
        // HashSet drops duplicates but does not keep the original order.
        Set<String> set = new HashSet<>(list);
        result.addAll(set);
    }
    return result;
}
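For example (the output order comes from HashSet's internal hashing, so it may differ from what is shown):

List<String> fruits = Arrays.asList("banana", "apple", "banana", "cherry");
System.out.println(listDistinct(fruits));   // e.g. [banana, cherry, apple] -- duplicates removed, order not guaranteed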
3. Deduplicate with a Set while controlling the order
// Two Set-based ways to deduplicate; note that only LinkedHashSet keeps the original order.
public static void delRepeat(List<String> list) {
    // Option 1: TreeSet removes duplicates but sorts the elements (natural order),
    // so the original order is NOT preserved.
    List<String> listNew = new ArrayList<>(new TreeSet<>(list));
    // Option 2: LinkedHashSet removes duplicates and keeps the original insertion order.
    List<String> listNew2 = new ArrayList<>(new LinkedHashSet<>(list));
}
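A short illustration of the difference between the two options (hypothetical input):

List<String> list = Arrays.asList("c", "a", "b", "a");
System.out.println(new ArrayList<>(new TreeSet<>(list)));        // [a, b, c]  (sorted)
System.out.println(new ArrayList<>(new LinkedHashSet<>(list)));  // [c, a, b]  (original order)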