I have an ITEM table with a CREATED_DATE column. In a clustered environment, many copies of a service will pick items out of this table and process them. Each service instance should select the earliest 10 items in the ITEM table.

I can select the top ten rows using this inside a stored procedure:

select * from (
    select item_id, row_number() over (order by created_date) rownumber
    from item )
where rownumber < 11

Because many services will use this, I'm using SELECT ... FOR UPDATE so each service can mark its rows as "processing". However, adding FOR UPDATE to the above SELECT fails with the error "ORA-02014: cannot select FOR UPDATE from view with DISTINCT, GROUP BY, etc."

OPEN items_cursor FOR
**select statement**

Please help me find a solution.

How about this for the cursor?

SELECT *
  FROM item
 WHERE (item_id,created_date) IN
       (SELECT item_id,created_date
          FROM (SELECT item_id, created_date
                     , ROW_NUMBER() OVER (ORDER BY created_date) rownumber
                  FROM item)
         WHERE rownumber < 11)
   FOR UPDATE;

You can use SKIP LOCKED along with a counter to do this, as long as you don't need each session to get contiguous rows. For example:

    declare
        l_cursor sys_refcursor;
        l_name all_objects.object_name%type;
        l_found pls_integer := 0;
    begin
        open l_cursor for
            select  object_name
            from all_objects
            order by created
            for update skip locked;

        loop
            fetch l_cursor into l_name;
            if l_cursor%found then
                l_found := l_found + 1;
                dbms_output.put_line(l_found || ':' || l_name);
                -- dbms_lock.sleep(1);
            end if;
            exit when l_cursor%notfound or l_found = 10;
        end loop;
        close l_cursor;
    end;

If you run this concurrently from two sessions they will get different objects (though you may want to leave the call to dbms_lock.sleep in the found block uncommented to make it slow enough to be visible).

Based on this post, when using SKIP LOCKED the selected rows aren't locked until they are fetched, and any rows locked by another session after the cursor is opened are simply skipped.
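A minimal sketch to observe that behaviour (a hypothetical two-session walk-through; the table and column names are the ones from the question):

    -- Session 1: open the cursor and fetch one row, then leave the
    -- transaction open. Only the fetched row is locked.
    declare
        l_id     item.item_id%type;
        l_cursor sys_refcursor;
    begin
        open l_cursor for
            select item_id
            from item
            order by created_date
            for update skip locked;
        fetch l_cursor into l_id;  -- this row becomes locked at fetch time
    end;
    /
    -- Session 2: running the same block now fetches the next unlocked
    -- row instead of blocking on (or re-reading) session 1's row.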

DCookie's answer does not solve multi-session processing (it is only a fix for the FOR UPDATE syntax). If you don't vary the rownumber range, every instance of the service is going to select the very same rows for update. If you execute that FOR UPDATE select in two sessions, the second one will wait until the first finishes its transaction. Parallel processing will be an illusion.
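For example (a hypothetical two-session transcript of DCookie's query with FOR UPDATE):

    -- Session 1:
    select item_id, created_date
      from item
     where (item_id, created_date) in
           (select item_id, created_date
              from (select item_id, created_date,
                           row_number() over (order by created_date) rownumber
                      from item)
             where rownumber < 11)
       for update;
    -- 10 rows locked; transaction left open.

    -- Session 2, same statement:
    -- waits on the row locks until session 1 commits or rolls back,
    -- and then returns the very same 10 rows.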

I would consider efficient bulk processing combined with the FOR UPDATE SKIP LOCKED approach. My suggestion:

  declare
    con_limit constant number default 10;
    cursor cItems is
      select i.item_id, i.created_date
      from item i
      order by i.created_date
      for update skip locked;
    type t_cItems is table of cItems%rowtype;
    tItems t_cItems;
  begin
    open cItems;
    loop
      fetch cItems bulk collect into tItems limit con_limit;
      -- process tItems here
      exit when tItems.count < con_limit;
    end loop;
    close cItems;
  end;

A possible disadvantage is the long-running transaction this keeps open. Consider using Oracle Streams Advanced Queuing (DBMS_AQ) instead of this solution.
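For illustration, a rough AQ sketch (the queue names item_qt/item_q are made up for this example, and a real payload would typically be an object type rather than RAW):

    -- One-time setup:
    begin
        dbms_aqadm.create_queue_table(
            queue_table        => 'item_qt',
            queue_payload_type => 'RAW');
        dbms_aqadm.create_queue(
            queue_name  => 'item_q',
            queue_table => 'item_qt');
        dbms_aqadm.start_queue('item_q');
    end;
    /
    -- Each service instance then dequeues; AQ delivers each message
    -- to exactly one consumer, so no FOR UPDATE bookkeeping is needed:
    declare
        l_deq_opts  dbms_aq.dequeue_options_t;
        l_msg_props dbms_aq.message_properties_t;
        l_payload   raw(2000);
        l_msgid     raw(16);
    begin
        dbms_aq.dequeue(
            queue_name         => 'item_q',
            dequeue_options    => l_deq_opts,
            message_properties => l_msg_props,
            payload            => l_payload,
            msgid              => l_msgid);
        -- process l_payload, then commit to remove it from the queue
        commit;
    end;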