
February 5th, 2014 07:00

Oracle-related questions for a refresh..

As I travel around meeting with customers and account teams, one of the requests I get (when attempting to crack an under-penetrated account where Oracle currently sits on NetApp, HDS, IBM, or HP) is for a list of questions to ask the customer when Oracle is driving the sale.

I've compiled a non-comprehensive list here (and the order of the questions does change). I would like to hear from you all what you ask the customer in the first meeting. For anyone who gets console access during a workshop, a small discovery sketch follows the list.

1. What versions and releases of Oracle do they have?

2. How large are their databases and their overall Oracle environment?

3. Do they use ASM?

4. How do they enable high availability and scalability for Oracle? Do they have RAC, are they evaluating RAC, or did they have RAC and move off?

5. What is their DR strategy for Oracle?

6. Do they run Oracle EBS, PeopleSoft, or other Oracle applications?

7. Do they virtualize Oracle (using VMware or Oracle OVM)?

8. Which operating system do they primarily run Oracle on? If OEL, do they use the UEK?

9. How do they do snaps and clones?

10. What is their backup strategy?
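Here is the discovery sketch mentioned above: most of the answers to questions 1-3 can be pulled straight from the data dictionary. This is a minimal sketch, assuming the python-oracledb driver and a read-only account with SELECT access on the V$/DBA views; the user, password, and DSN are placeholders, not real endpoints.

```python
# Minimal discovery sketch: pulls answers to questions 1-3 from one instance.
# Assumes python-oracledb and SELECT access on V$/DBA views; the credentials
# and DSN below are placeholders.
import oracledb

QUERIES = {
    "version": "SELECT banner FROM v$version WHERE ROWNUM = 1",
    "db_gb":   "SELECT ROUND(SUM(bytes)/1024/1024/1024, 1) FROM dba_data_files",
    "rac":     "SELECT value FROM v$parameter WHERE name = 'cluster_database'",
    "asm_dgs": "SELECT COUNT(*) FROM v$asm_diskgroup",  # 0 rows/0 count if no ASM
}

def discover(user: str, password: str, dsn: str) -> dict:
    """Run each probe query and return {label: first column of first row}."""
    results = {}
    with oracledb.connect(user=user, password=password, dsn=dsn) as conn:
        cur = conn.cursor()
        for label, sql in QUERIES.items():
            cur.execute(sql)
            row = cur.fetchone()
            results[label] = row[0] if row else None
    return results

if __name__ == "__main__":
    print(discover("scott", "tiger", "dbhost/orclpdb"))  # placeholder connection
```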

Thanks

109 Posts

February 6th, 2014 10:00

I have really enjoyed reading this discussion and believe there is an ongoing race between FC and Gigabit Ethernet. At the moment I would give the edge to Ethernet, and I believe in the near future we will see 100 GbE in data centers. A very good question to ask customers could be, "Are you considering 40 or 100 Gigabit Ethernet in your data center?" It is not a question that applies directly to storage, but it plays an important role in discussing storage solutions, and not only storage: faster Ethernet is changing how we architect virtualization, databases, storage, and online gaming. I've been through many disruptive waves of technology, and I'm watching Ethernet as another game changer.

256 Posts

February 5th, 2014 08:00

Saty:

You are missing the storage protocol-related questions. Not surprising. Most EMC guys are what we used to call "SAN bigots". I come from a different place: I was at NetApp for 8 years before EMC, and in that role was largely responsible for the initial technical work in the area of Oracle on NFS. Anyway, the relevant questions would be:

  • IP storage vs. SAN (obviously).
  • If IP storage, then dNFS vs. iSCSI (the latter will really be rare, other than on Windows, where it is fairly common in some shops).
  • If SAN, then of course ASM vs. ext, NTFS, Veritas, etc. Lots of dead ends there. (A quick way to check which path a customer is actually on is sketched below.)
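If you can run a couple of queries on the customer's instance, the protocol question largely answers itself. A hedged sketch, again assuming python-oracledb and placeholder credentials; V$DNFS_SERVERS is populated only when the Direct NFS client is active, and V$ASM_DISKGROUP is empty when the database is not on ASM.

```python
# Sketch: detect whether an instance is on dNFS, ASM, or plain file systems.
# V$DNFS_SERVERS lists active Direct NFS server connections (empty otherwise);
# V$ASM_DISKGROUP is empty when the database does not use ASM storage.
import oracledb

def storage_path(conn) -> str:
    cur = conn.cursor()
    cur.execute("SELECT COUNT(*) FROM v$dnfs_servers")
    if cur.fetchone()[0] > 0:
        return "dNFS (Direct NFS client is active)"
    cur.execute("SELECT COUNT(*) FROM v$asm_diskgroup")
    if cur.fetchone()[0] > 0:
        return "ASM (block storage under ASM diskgroups)"
    return "conventional file system (ext/NTFS/Veritas/etc.)"

# Placeholder connection details -- replace with the customer's DSN.
with oracledb.connect(user="scott", password="tiger", dsn="dbhost/orclpdb") as conn:
    print(storage_path(conn))
```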

Regards,

Jeff

February 5th, 2014 12:00

Great point, Jeff. I think a lot of SEs may not be aware of the advantages (perceived or otherwise) of having NFS-mounted directories for Oracle, or a shared NFS (single code base) for Oracle. Of course, there are disadvantages to having a single Oracle home (code tree) for a RAC environment as well.

But from an IP vs. SAN perspective, or ASM vs. ext or a third-party file system, these are areas we and the account SEs should be finding out more about as well.

Thanks

-Saty

256 Posts

February 5th, 2014 13:00

Saty:

I am not sure the advantages are that compelling any more. But at the time (late 90s and early 2000s), NFS was a killer feature, and when 11g shipped with dNFS it was officially over: NFS was mainstream.

The advantages (at the time) were:

  • Reduced port cost vs. FC (i.e., switches, HBAs, etc. were more expensive than equivalent Ethernet gear). Is that still true? Barely. I would say in an era of 10 GbE / 16 Gb FC, we are about to get to a point where the wire really doesn't matter. And many switches allow you to install a port module with either Ethernet or FC ports on it.
  • Snapshots were very popular with NetApp early on. The perception was that NetApp snaps were somehow better than ours. Hogwash. I had to laugh when NetApp finally came out with read/write snaps, as if that was something great. Our snaps were always read/write. Yes, NetApp's snaps have no write performance penalty. But they have (effectively) a big sequential read penalty, and so, bottom line, it's about even in my view. No advantage for NFS there either, now. (A toy simulation of the trade-off follows this list.)
  • Perceived simplicity / ease of management of IP vs. FC. Again, is this really true now? I don't see any significant difference between administering an FC port vs. an Ethernet port on a switch (say, Cisco MDS) that allows either one.
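To make the sequential-read-penalty point concrete, here is a toy model (an illustration only, not any vendor's actual layout) contrasting copy-on-first-write with redirect-on-write snapshots. It counts how scattered the live file's blocks become after random overwrites, which is roughly what a full table scan would feel.

```python
# Toy model of snapshot styles (illustration only, not a vendor implementation).
# COFW: overwritten blocks are first copied to a snap area; live data stays put.
# ROW:  overwrites are redirected to new locations; the live file fragments.
import random

BLOCKS, OVERWRITES = 1000, 300
random.seed(42)

def fragmentation(layout):
    """Count physical discontinuities seen by a sequential (logical) scan."""
    return sum(1 for a, b in zip(layout, layout[1:]) if b != a + 1)

# The live file initially occupies physical blocks 0..BLOCKS-1 in order.
cofw = list(range(BLOCKS))   # copy-on-first-write: live layout never moves
row = list(range(BLOCKS))    # redirect-on-write: overwrites relocate blocks
next_free = BLOCKS
for _ in range(OVERWRITES):
    victim = random.randrange(BLOCKS)
    row[victim] = next_free   # new version written to a fresh location
    next_free += 1
    # COFW pays a copy *write* here instead, but its live layout is unchanged.

print("COFW seeks on full scan:", fragmentation(cofw))  # -> 0
print("ROW  seeks on full scan:", fragmentation(row))   # -> hundreds
```

The write cost moves one way, the read cost the other, which is why Jeff calls it about even.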

Oracle has muddied the waters a bit by adding features to dNFS that ASM lacks. Arguably, ASM is falling behind in that area. dNFS CloneDB, for example, is an utterly charming feature that ASM presently does not have.
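For anyone who hasn't seen it, CloneDB layers a copy-on-write clone over read-only backup files on an NFS mount. Here is a rough sketch of the rename step, assuming a clone instance already started in MOUNT state with CLONEDB=TRUE and a controlfile pointing at backup copies staged on NFS; all paths and credentials are placeholders.

```python
# Sketch of the CloneDB rename step via dbms_dnfs.clonedb_renamefile.
# Assumes a clone instance mounted with CLONEDB=TRUE and backup datafiles
# already staged on an NFS mount; paths and credentials are placeholders.
import oracledb

BACKUP_TO_CLONE = {
    "/nfs/backup/system01.dbf": "/nfs/clone/system01.dbf",
    "/nfs/backup/users01.dbf":  "/nfs/clone/users01.dbf",
}

with oracledb.connect(user="sys", password="placeholder", dsn="clonehost/clone",
                      mode=oracledb.AUTH_MODE_SYSDBA) as conn:
    cur = conn.cursor()
    for backup, clone in BACKUP_TO_CLONE.items():
        # Each call creates a thin, copy-on-write file over the backup copy.
        cur.callproc("dbms_dnfs.clonedb_renamefile", [backup, clone])
    cur.execute("ALTER DATABASE OPEN RESETLOGS")
```

The charm is that the clone consumes almost no space until blocks are changed.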

Having said that, at least in an EMC-centric view, FC is way cooler. On FC you can do VPLEX, RecoverPoint, TimeFinder, etc. On NFS, you have the old Celerra Replicator and Celerra SnapSure checkpoint stuff, which is just not as cool as VPLEX or RecoverPoint. Not even close.

Eventually, once we get to software defined networking, the network protocol no longer matters. And we are very close to that point.

Regards,

Jeff

643 Posts

February 6th, 2014 02:00

All great points!

I would like to recommend our Oracle Sales Playbook, which has a comprehensive list of questions for end users, application owners, and CXOs, ranging from high-level messaging to technical details.

February 6th, 2014 11:00

Great idea Simon..

Let's collect these responses and add others..

Thanks

15 Posts

February 10th, 2014 12:00

Hi Sam,

I agree with you, but I think it does apply to storage. More and more I see startups trying to work in this space, and they are starting to see an uptick in smaller accounts. Yes, super-fast Ethernet is coming, and sooner than we all expect...

27 Posts

February 11th, 2014 01:00

I would like to add some more questions here:

1) What is the acceptable time lag between the primary and the DR copy?

2) What is the maximum downtime that can be allowed?

3) What will be the recovery strategy? How frequently will DR tests be performed?

Based on this input we can strategize the Oracle solution. (A back-of-the-envelope sizing sketch follows.)
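The first two answers translate directly into RPO and RTO, and the RPO answer sizes the replication link. Here is the back-of-the-envelope sketch mentioned above; the redo rate, link speed, and burst length are made-up illustrative inputs, not measurements from any customer.

```python
# Back-of-the-envelope RPO check: can the replication link keep the DR copy
# within the agreed lag? All inputs below are illustrative placeholders.
def worst_case_lag_seconds(peak_redo_mb_per_s: float,
                           link_mb_per_s: float,
                           burst_seconds: float) -> float:
    """Seconds of lag accumulated during a redo burst, then drained by the link."""
    backlog_mb = max(0.0, (peak_redo_mb_per_s - link_mb_per_s) * burst_seconds)
    return backlog_mb / link_mb_per_s

rpo_seconds = 300  # customer answer to question 1: a 5-minute lag is acceptable
lag = worst_case_lag_seconds(peak_redo_mb_per_s=120,  # batch-window redo burst
                             link_mb_per_s=80,        # replication link throughput
                             burst_seconds=600)       # burst lasts 10 minutes
print(f"worst-case lag: {lag:.0f}s -> "
      f"{'OK' if lag <= rpo_seconds else 'link too small'}")
```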


28 Posts

February 11th, 2014 06:00

Saty,

Good set of questions to get the conversation going with customers and very similar to what I have been asking. How about posting this to the Oracle PreSales Minors site?

Yes folks, EMC is actively training its presales technical teams on the Oracle stack and on Oracle and EMC integration. The goal is for EMC to fully engage the Oracle owners and expand EMC's existing global Oracle Specialists team.

bg

109 Posts

February 11th, 2014 10:00

WBGaynor,

Good call on posting those questions to the Oracle Pre-sales Minor community!

February 11th, 2014 14:00

Bill.. Thanks.. Great idea...

February 28th, 2014 07:00

To continue this thread, and to consolidate (before posting on the Oracle Minors site), in light of the additional options available to customers for Oracle workloads (Flash technology), we should also ask:

1. What is their current Oracle IO profile? (Then position either a complimentary Oracle AWR analysis through MiTrend or the DBClassify Professional Service, depending on the customer's propensity to purchase it. A minimal AWR-style probe is sketched after this list.)

2. Will they benefit from all-Flash arrays and technologies, and/or from the sub-millisecond response times offered out of the box by all-Flash arrays such as XtremIO?

3. Are they evaluating other all-Flash arrays such as Pure, or Flash card technologies like Fusion-io?

4. What array are they on today for Oracle? Does it primarily have one type of disk (say SAS), or does it have tiering capabilities? Or do they pin specific 'hot' tablespaces to faster disk (say EFD)?
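On the IO-profile question, even before a MiTrend engagement you can get a first-order answer from the AWR repository, licensing permitting (the DBA_HIST_* views require the Diagnostics Pack). A minimal sketch, assuming python-oracledb, 12c-style FETCH FIRST syntax, and placeholder credentials:

```python
# Sketch: average read IOPS between the two most recent AWR snapshots.
# DBA_HIST_* views require a Diagnostics Pack license; credentials and DSN
# are placeholders. Assumes the cumulative counter was not reset in between.
import oracledb

SQL = """
SELECT s.snap_id, st.value, CAST(s.end_interval_time AS DATE) AS snap_time
FROM   dba_hist_snapshot s
JOIN   dba_hist_sysstat  st
       ON  st.snap_id = s.snap_id
       AND st.dbid = s.dbid
       AND st.instance_number = s.instance_number
WHERE  st.stat_name = 'physical read total IO requests'
AND    s.instance_number = 1
ORDER  BY s.snap_id DESC
FETCH  FIRST 2 ROWS ONLY
"""

with oracledb.connect(user="perfuser", password="placeholder",
                      dsn="dbhost/orclpdb") as conn:
    cur = conn.cursor()
    cur.execute(SQL)
    (snap2, v2, t2), (snap1, v1, t1) = cur.fetchall()  # newest row first
    elapsed = (t2 - t1).total_seconds()
    print(f"avg read IOPS between snaps {snap1}->{snap2}: {(v2 - v1)/elapsed:.0f}")
```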

Thanks..

256 Posts

February 28th, 2014 08:00

Sam:

Eventually, I think once we get to the Software Defined Data Center (SDDC), the wire is irrelevant and the protocol arguably no longer matters (or perhaps a better way to say it is that you can run whatever protocol you like on whatever wire you like).

Cisco now makes switches which allow you to change out the module which determines the wire and protocol. MDS, for example, can switch on the fly between FC and IP / 10 GbE. Once we get to the point where we can do that in software (and we are very close to that point right now), then the entire protocol argument is over.

Which has primarily been about latency and bandwidth anyway. The benchmark for a performant network technology at the moment is InfiniBand (IB). In a pre-SDDC context, IB provides, in theory, about 40 Gb/s of throughput with latencies in the low triple-digit microsecond to small single-digit millisecond range (see the InfiniBand article on Wikipedia). But after SDDC, IP/Ethernet (or whatever they are going to call it in the post-SDDC context) will provide 100 Gb/s of throughput with very similar latency to IB.
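That point is easy to sanity-check with arithmetic: at these speeds, the serialization time of a typical 8 KB Oracle block is dwarfed by even a modest fixed latency, so the remaining difference between the wires is noise. The numbers below are illustrative only.

```python
# Illustrative arithmetic: time to move one 8 KB block at IB vs. 100 GbE speeds.
# Serialization time is size/bandwidth; total adds an assumed one-way latency.
BLOCK_BITS = 8 * 1024 * 8  # one 8 KB Oracle block, in bits
LATENCY_US = 100           # assumed fixed one-way latency, per the thread

for name, gbit_per_s in [("IB (40 Gb/s)", 40), ("100 GbE", 100)]:
    serialize_us = BLOCK_BITS / (gbit_per_s * 1e9) * 1e6
    print(f"{name}: serialize {serialize_us:.2f} us "
          f"+ latency {LATENCY_US} us = {serialize_us + LATENCY_US:.2f} us")
# Both totals land within about 1 us of each other: latency dominates,
# so the wire (and the protocol on it) stops being the bottleneck.
```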

At that point, the network is no longer the bottleneck, arguably. We will flood the bus, and the CPU layer will then become the bottleneck for a while.

It will be interesting soon!

Regards,

Jeff
