Guide to using a duplicate remover in Python
If using a third-party package would be okay, then you could use iteration_utilities.unique_everseen:
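For reference, the core idea behind such an order-preserving deduplicator is the unique_everseen recipe from the itertools documentation, which third-party implementations build on. A minimal pure-Python sketch, covering hashable items only:

```python
def unique_everseen(iterable):
    """Yield unique elements, preserving order of first occurrence.

    Minimal sketch for hashable items; third-party implementations
    additionally fall back to a slower list-based scan for unhashable ones.
    """
    seen = set()
    for element in iterable:
        if element not in seen:
            seen.add(element)
            yield element

print(list(unique_everseen(['f', 'g', 'f', 's', 'g'])))  # ['f', 'g', 's']
```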
It preserves the order of the original list, and it can also handle unhashable items like dictionaries by falling back on a slower algorithm. In the case of a dictionary (which compares independently of order), you need to map it to another data structure that compares the same way, for example a frozenset of its items:
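A sketch of that key trick with a plain seen-set (unique_dicts is an illustrative name, not part of any library):

```python
def unique_dicts(dicts):
    """Remove duplicate dictionaries, preserving first-occurrence order."""
    # frozenset(d.items()) compares independent of insertion order,
    # just like the dictionaries themselves, and it is hashable.
    seen = set()
    result = []
    for d in dicts:
        key = frozenset(d.items())
        if key not in seen:
            seen.add(key)
            result.append(d)
    return result

print(unique_dicts([{'a': 1}, {'a': 2}, {'a': 1}]))  # [{'a': 1}, {'a': 2}]
```

This assumes the dictionary values are themselves hashable; otherwise the frozenset construction fails the same way hashing the dictionary would.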
Note that you shouldn't use a simple tuple(item.items()) as the key, because two equal dictionaries aren't guaranteed to yield their items in the same order.
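The pitfall can be seen directly in CPython, where dictionaries remember insertion order:

```python
d1 = {'a': 1, 'b': 2}
d2 = {'b': 2, 'a': 1}

print(d1 == d2)                                # True: dicts compare by content
print(tuple(d1.items()) == tuple(d2.items()))  # False: tuples compare by order
```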
And even sorting the tuple might not work if the keys aren't sortable:
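For example, with a dictionary whose keys mix strings and integers, sorting the items raises a TypeError on Python 3:

```python
item = {'a': 1, 1: 'a'}
try:
    key = tuple(sorted(item.items()))
except TypeError as exc:
    # Python 3 refuses to order str and int against each other
    print('sorting failed:', exc)
```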
Benchmark

I thought it might be useful to see how the performance of these approaches compares, so I did a small benchmark. The benchmark graphs plot time vs. list size, based on a list containing no duplicates (that was chosen arbitrarily; the runtime doesn't change significantly if I add some or lots of duplicates). It's a log-log plot, so the complete range is covered.

The absolute times:

[benchmark plot: absolute times]

The timings relative to the fastest approach:

[benchmark plot: timings relative to the fastest approach]

The second approach from thefourtheye is fastest here.

The code to reproduce the benchmarks:
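The original benchmark code didn't survive the copy. A minimal timeit-based sketch in the same spirit: duplicate-free input, log-spaced list sizes. The two functions compared here are common deduplication recipes, not necessarily the exact candidates benchmarked above:

```python
import timeit

def unique_seen_set(lst):
    # seen-set comprehension: O(n), preserves order
    seen = set()
    return [x for x in lst if not (x in seen or seen.add(x))]

def unique_fromkeys(lst):
    # dict.fromkeys preserves insertion order on Python 3.7+
    return list(dict.fromkeys(lst))

for exponent in range(2, 13, 2):       # log-spaced list sizes: 4 .. 4096
    n = 2 ** exponent
    data = list(range(n))              # no duplicates, as in the plots above
    for func in (unique_seen_set, unique_fromkeys):
        seconds = timeit.timeit(lambda: func(data), number=200)
        print(f'{func.__name__:>16}  n={n:<5}  {seconds:.6f}s')
```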
For completeness, here is the timing for a list containing only duplicates:

[benchmark plot: list containing only duplicates]
The timings don't change significantly, except for one of the approaches.

Disclaimer: I'm the author of iteration_utilities.