You can get the unique values by converting the list to a set. My function populates the dictionary keys so that they are unique, but my list still contains duplicates. The execution time was a little slower, but still comparable. Because it inherits from list, it essentially behaves like a list, so you can use methods like index() and so on. These are then used in your function. Optimizing attribute lookup can give roughly a 15% speedup. How do I get only the unique values?
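A minimal sketch of the set-conversion approach (the function and variable names here are my own, not from the original answer):

```python
def unique_values(items):
    """Return the unique values from a list by converting it to a set.

    Note: sets are unordered, so the original element order is lost.
    """
    return list(set(items))

data = [3, 1, 2, 3, 2, 1]
print(sorted(unique_values(data)))  # sorted only for a stable display
```

If the order of first appearance matters, this one-liner is not enough; the order-preserving variants discussed further down handle that case.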
Finally, the contents of the set are sorted and returned. Can someone point me in the right direction so that my dictionary values do not contain duplicate elements? The final line of code is the key to this function. Unless axis is specified, numpy.unique flattens the input if it is not already 1-D. That assumes a pure functional environment. Any help would be appreciated.
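A short sketch of the "sort the set and return it" pattern described above, applied both to a plain list and to deduplicating a dictionary's values (names are mine):

```python
def sorted_unique(items):
    """Collect values into a set to drop duplicates, then return them sorted."""
    return sorted(set(items))

print(sorted_unique(["b", "a", "b", "c"]))  # ['a', 'b', 'c']

# The same idea cleans up duplicate elements inside dictionary values.
d = {"k1": [1, 2, 2, 3], "k2": [5, 5]}
deduped = {key: sorted_unique(vals) for key, vals in d.items()}
print(deduped)  # {'k1': [1, 2, 3], 'k2': [5]}
```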
Example 2: Using Numpy with Arcpy. The second example uses the numpy module with Arcpy to deliver the same results using a different method. This shapefile includes 25,388 records and a field called FireType that contains an indicator of how each fire started. I want to create a list or set of all unique values appearing in a list of lists in Python. The table can be a stand-alone table or a feature class. The 'X' instances always compare unequal, so every element is treated as unique. If you don't know what your data looks like, f5 is probably a safe bet.
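A sketch of the numpy side of that workflow. Arcpy is proprietary and not available here, so the array below is a hypothetical stand-in for the FireType field; in the real workflow something like arcpy.da.TableToNumPyArray would produce it from the shapefile:

```python
import numpy as np

# Hypothetical stand-in for the FireType field values pulled from the table.
fire_type = np.array(["Lightning", "Human", "Lightning", "Unknown", "Human"])

# np.unique returns the sorted unique elements of the array.
unique_types = np.unique(fire_type)
print(unique_types)  # ['Human' 'Lightning' 'Unknown']
```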
Returns the sorted unique elements of an array. This is because the set is updated as each element is tested during the list comprehension. Or would that be too 'expensive'? A similar pattern can be seen in the second approach with the reduce method. Well, yes, and here is where the second type of short-circuit operator comes into play. I'm not sure about performance, but as far as readability goes I like the simplicity of this one. I guess I should have posted this in my original question.
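The short-circuit trick mentioned above can be sketched like this: set.add() returns None, which is falsy, so an `or` inside the comprehension's filter both records the element and lets it through exactly once (function names are mine):

```python
def unique_ordered(items):
    """Order-preserving dedup via short-circuit evaluation.

    `x in seen or seen.add(x)` is False-or-None (falsy) for a new x,
    and True for a repeat, so `not (...)` keeps each value once.
    """
    seen = set()
    return [x for x in items if not (x in seen or seen.add(x))]

print(unique_ordered([1, 2, 1, 3, 2]))  # [1, 2, 3]
```

Readable or not is a matter of taste; relying on add() returning None is exactly the kind of cleverness some reviewers object to.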
For everyone having problems figuring it out: the example you provided does not correspond to lists in Python. Some of the ordering is changed, but that is understandable since we are removing duplicates. One method is to check whether the element is already in the list before adding it, which is what Tony's answer does. There is lots to play around with in the length of the items and the loop counts! So if you know what your data looks like, you might be able to do better than f5.
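A sketch of the check-before-append method described above (my own naming, not Tony's exact code). It preserves order but is O(n^2), since each membership test scans the result list:

```python
def unique_via_list(items):
    """Order-preserving dedup that checks membership before appending.

    Simple and readable, but each `in` test is a linear scan of `result`,
    so this degrades badly on long inputs with many distinct values.
    """
    result = []
    for x in items:
        if x not in result:
            result.append(x)
    return result

print(unique_via_list(["a", "b", "a", "c"]))  # ['a', 'b', 'c']
```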
I haven't looked at memory usage. This works thanks to the or statement. The Set class used in f6 is implemented in Python rather than C in the 2.x series.
Removing continue statements can give another 5%. Numpy is a library for the Python programming language, adding support for large, multi-dimensional arrays and matrices, along with a large collection of high-level mathematical functions to operate on these arrays. For sets, use the built-in set in all the functions. Removing duplicate transmissions falls in that category (and you probably ought to fix your protocol if that's the case), but if there are other use cases I can't think of them. The slowest function was 78 times slower than the fastest. Keep in mind that there are certainly other ways this can be done, but both of these methods execute quickly and are easy to code.
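A sketch of the kind of benchmark loop the thread describes. The f5/f6 implementations from the original post aren't reproduced here, so this harness compares two representative variants under timeit (all names are mine):

```python
import timeit

def f_set(seq):
    # Fast, order-destroying dedup via the built-in set.
    return list(set(seq))

def f_list(seq):
    # Order-preserving but O(n^2) dedup via list membership tests.
    result = []
    for x in seq:
        if x not in result:
            result.append(x)
    return result

data = list(range(200)) * 5  # 1000 items, 200 distinct values

for fn in (f_set, f_list):
    t = timeit.timeit(lambda: fn(data), number=200)
    print(f"{fn.__name__}: {t:.4f}s")
```

On inputs like this the set-based version is typically far faster, which is consistent with the large spread between the slowest and fastest functions reported above.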
However, the memory footprint of numpy arrays is generally thought to be much smaller. I wrote a couple of alternative implementations and ran a quick benchmark loop over them to find out which way was the fastest. If you want to count unique occurrences from an iterator, you can't return a generator. Test it before arguing; maybe it was good. Lists are ordered collections of elements. Here is an alternative order-preserving function.
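A sketch of such an order-preserving function, combined with the attribute-lookup optimization mentioned earlier: binding seen.add to a local name avoids a repeated attribute lookup on every iteration (naming is mine):

```python
def unique_ordered_fast(items):
    """Order-preserving dedup using a set for O(1) membership tests.

    Binding `seen.add` to a local name skips the per-iteration attribute
    lookup, which is where the small speedup mentioned above comes from.
    `seen_add(x)` returns None, so `not seen_add(x)` is always True and
    merely records x as a side effect.
    """
    seen = set()
    seen_add = seen.add
    return [x for x in items if x not in seen and not seen_add(x)]

print(unique_ordered_fast([3, 3, 1, 2, 1]))  # [3, 1, 2]
```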