7552

What are the differences between these two, and which one should I use?

string s = "Hello world!";
String s = "Hello world!";
17
  • 111
    @O.R.Mapper, but the fact remains that string is a lexical construct of the C# grammar whereas System.String is just a type. Regardless of any explicit difference mentioned in any spec, there is still this implicit difference that could be accommodated with some ambiguity. The language itself must support string in a way that the implementation is not (quite) so obligated to consider for a particular class in the BCL.
    – Kirk Woll
    Commented Dec 2, 2014 at 3:05
  • 150
    @KirkWoll: According to the language specification, the language itself must consider string to be exactly the same as the BCL type System.String, nothing else. That is not ambiguous at all. Of course, you can implement your own compiler, using the C# grammar, and use all of the tokens found like that for something arbitrary, unrelated to what is defined in the C# language specification. However, the resulting language would only be a C# lookalike, it could not be considered C#. Commented Dec 2, 2014 at 8:22
  • 126
    You can use string without a using directive for System. You can't do that with String.
    – Wilsu
    Commented Nov 30, 2015 at 8:52
  • 28
    For someone coming from Algol and Fortran, this discussion shows there is something wrong with string. It is needed to abbreviate System.String, but, as an alias, it seems quite like, but not exactly the same thing. After several years of C#, though, I'd say, it is safe to simply use string and string.Format() and not to worry about System.String.
    – Roland
    Commented Dec 20, 2016 at 0:24
  • 28
    @Sangeeta What are you saying? The System.String class is still there, and the string keyword is still an alias for it. Just like System.Int32 and int. They are literally the same thing. Commented Dec 8, 2018 at 2:14

68 Answers

7080

string is an alias in C# for System.String.
So technically, there is no difference. It's like int vs. System.Int32.

As far as guidelines, it's generally recommended to use string any time you're referring to an object.

e.g.

string place = "world";

Likewise, I think it's generally recommended to use String if you need to refer specifically to the class.

e.g.

string greet = String.Format("Hello {0}!", place);

This is the style that Microsoft tends to use in their examples.

It appears that the guidance in this area may have changed, as StyleCop now enforces the use of the C# specific aliases.
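
If you want to check the equivalence for yourself, here's a minimal sketch (the AliasCheck class name is arbitrary); both comparisons print True because string compiles to System.String:

using System;

class AliasCheck
{
    static void Main()
    {
        // The keyword and the class name refer to the exact same type.
        Console.WriteLine(typeof(string) == typeof(String));        // True
        Console.WriteLine(typeof(string) == typeof(System.String)); // True
    }
}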

11
  • 197
    If you decide to use StyleCop and follow that, that will say to use the types specific to the language. So for C# you'll have string (instead of String), int (instead of Int32), float (instead of Single) - stylecop.soyuz5.com/SA1121.html Commented May 22, 2012 at 22:36
  • 194
    I always use the aliases because I've assumed one day it might come in handy because they are acting as an abstraction, so therefore can have their implementations changed without me having to know.
    – Rob
    Commented Oct 12, 2012 at 23:25
  • 63
    Visual Studio 2015 says that String.Format should be changed to string.Format, so I guess Microsoft is going that way. I have also always used String for the static methods. Commented Dec 22, 2014 at 5:21
  • 5
    What do you say to the fact that you could define your own type “String” but can’t do the same for “string” as it’s a keyword, as explained in stackoverflow.com/questions/7074/…
    – jmoreno
    Commented Oct 13, 2020 at 21:37
  • 5
    I guess then... Just be consistent. Use string or String, or use a certain one in a specific case, but always in that case.
    – Rob L
    Commented Nov 29, 2020 at 6:50
3818

Just for the sake of completeness, here's a brain dump of related information...

As others have noted, string is an alias for System.String. Assuming your code using String compiles to System.String (i.e. you haven't got a using directive for some other namespace with a different String type), they compile to the same code, so at execution time there is no difference whatsoever. This is just one of the aliases in C#. The complete list is:

bool:    System.Boolean
byte:    System.Byte
char:    System.Char
decimal: System.Decimal
double:  System.Double
float:   System.Single
int:     System.Int32
long:    System.Int64
nint:    System.IntPtr
object:  System.Object
sbyte:   System.SByte
short:   System.Int16
string:  System.String
uint:    System.UInt32
ulong:   System.UInt64
ushort:  System.UInt16

Apart from string and object, the aliases are all to value types. decimal is a value type, but not a primitive type in the CLR. System.IntPtr used to be the only primitive type without an alias; C# 9 added nint for it (and nuint for System.UIntPtr).

In the spec, the value type aliases are known as "simple types". Literals can be used for constant values of every simple type; no other value types have literal forms available. (Compare this with VB, which allows DateTime literals, and has an alias for it too.)

There is one circumstance in which you have to use the aliases: when explicitly specifying an enum's underlying type. For instance:

public enum Foo : UInt32 {} // Invalid
public enum Bar : uint   {} // Valid

That's just a matter of the way the spec defines enum declarations - the part after the colon has to be the integral-type production, which is one token of sbyte, byte, short, ushort, int, uint, long, ulong, char... as opposed to a type production as used by variable declarations for example. It doesn't indicate any other difference.

Finally, when it comes to which to use: personally I use the aliases everywhere for the implementation, but the CLR type for any APIs. It really doesn't matter too much which you use in terms of implementation - consistency among your team is nice, but no-one else is going to care. On the other hand, it's genuinely important that if you refer to a type in an API, you do so in a language-neutral way. A method called ReadInt32 is unambiguous, whereas a method called ReadInt requires interpretation. The caller could be using a language that defines an int alias for Int16, for example. The .NET framework designers have followed this pattern, good examples being in the BitConverter, BinaryReader and Convert classes.
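
As a hypothetical illustration of that API point (the PacketReader type and its method below are made up, not BCL members), a sketch in which the public surface spells out the CLR name while the implementation uses the alias:

using System;
using System.IO;

public class PacketReader
{
    private readonly BinaryReader reader;

    public PacketReader(Stream stream)
    {
        reader = new BinaryReader(stream);
    }

    // Language-neutral name: callers in any .NET language know this reads 32 bits.
    public Int32 ReadLengthInt32()
    {
        int length = reader.ReadInt32(); // the C# alias is fine inside the implementation
        return length;
    }
}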

5
  • What does 'not a primitive type in the CLR' mean? That if you use decimals in your C# code, it won't be interoperable with other .NET languages like BASIC?
    – Haighstrom
    Commented Aug 7, 2022 at 12:14
  • @Haighstrom: It's not a matter of it not being interoperable - it's that the CLR instruction set doesn't have the concept of decimal. It's not a primitive type in that respect.
    – Jon Skeet
    Commented Aug 7, 2022 at 15:12
  • I guess my use of 'interoperable' was incorrect, but my question is what does decimal not being in the CLR mean on a practical level? i.e. what does writing decimal in my C# code stop me doing? It makes the .dll that I create from it CLS non-compliant, which means I can't import it into a BASIC project?
    – Haighstrom
    Commented Aug 13, 2022 at 19:28
  • 2
    @Haighstrom: As one example, it means that decimal values can't be used in attributes. (I don't believe the use of decimal is CLS-non-compliant though.)
    – Jon Skeet
    Commented Aug 14, 2022 at 8:27
  • I guess also there is now an alias for IntPtr (nint).
    – Skint007
    Commented May 12 at 21:53
836

String stands for System.String and it is a .NET Framework type. string is an alias in the C# language for System.String. Both of them are compiled to System.String in IL (Intermediate Language), so there is no difference. Choose what you like and use that. If you code in C#, I'd prefer string as it's a C# type alias and well-known by C# programmers.

I can say the same about (int, System.Int32) etc..

10
  • 17
    I personally prefer using "Int32", since it immediately shows the range of the value. Imagine if they upgraded the type of "int" on later higher-bit systems. 'int' in c is apparently seen as "the integer type that the target processor is most efficient working with", and defined as "at least 16 bit". I'd prefer predictable consistency there, thank you very much.
    – Nyerguds
    Commented Apr 28, 2016 at 11:41
  • 8
    @MyDaftQuestions I concur. If anything it would make sense to consistently use the .net types because they are language ignorant and the type is obvious, independent of any language (do I know all of F#'s or VB's idiosyncrasies?). Commented Jan 21, 2017 at 17:39
  • 24
    @Nyerguds There are two reasons to simply not worry about it. One is that int is defined in the C# language spec as a 32 bit integer regardless of the hardware. C#, despite a shared heritage in the mists of time, is not actually C. Changing int to a 64 bit integer would be a breaking change in the specification and the language. It would also require redefining long, as long is currently the 64 bit integer. The other reason not to worry is irrelevant since the types will never change, but .NET is just abstract enough that 99% of the time you don't have to think about it anyway. ;-) Commented Dec 8, 2018 at 2:47
  • 16
    @Craig I dig into lots of old proprietary game formats where I do have to think about that all the time, though. And then using Int16, Int32 and Int64 is a lot more transparent in the code than using the rather nondescriptive short, int and long
    – Nyerguds
    Commented Dec 9, 2018 at 2:29
  • 9
    But short, int, long, float, double, et al are descriptive, because they’re in the language spec. C# is not C. I prefer them on declarations because they’re concise, small, and aesthetically pleasing. I do prefer the framework type names on APIs where the API has a data type dependency. Commented Oct 14, 2020 at 15:49
596

The best answer I have ever heard about using the provided type aliases in C# comes from Jeffrey Richter in his book CLR Via C#. Here are his 3 reasons:

  • I've seen a number of developers confused, not knowing whether to use string or String in their code. Because in C# the string (a keyword) maps exactly to System.String (an FCL type), there is no difference and either can be used.
  • In C#, long maps to System.Int64, but in a different programming language, long could map to an Int16 or Int32. In fact, C++/CLI does in fact treat long as an Int32. Someone reading source code in one language could easily misinterpret the code's intention if he or she were used to programming in a different programming language. In fact, most languages won't even treat long as a keyword and won't compile code that uses it.
  • The FCL has many methods that have type names as part of their method names. For example, the BinaryReader type offers methods such as ReadBoolean, ReadInt32, ReadSingle, and so on, and the System.Convert type offers methods such as ToBoolean, ToInt32, ToSingle, and so on. Although it's legal to write the following code, the line with float feels very unnatural to me, and it's not obvious that the line is correct:
BinaryReader br = new BinaryReader(...);
float val  = br.ReadSingle(); // OK, but feels unnatural
Single val = br.ReadSingle(); // OK and feels good

So there you have it. I think these are all really good points. However, I don't find myself using Jeffrey's advice in my own code. Maybe I'm too stuck in my C# world, but I end up trying to make my code look like the framework code.

527

string is a reserved word, but String is just a class name. This means that string cannot be used as a variable name by itself.

If for some reason you wanted a variable called string, you'd see only the first of these compiles:

StringBuilder String = new StringBuilder();  // compiles
StringBuilder string = new StringBuilder();  // doesn't compile 

If you really want a variable name called string you can use @ as a prefix:

StringBuilder @string = new StringBuilder();

Another critical difference: Stack Overflow highlights them differently.

4
  • 12
    string is also more efficient than String. It has fewer pixels, which consumes less energy.
    – Phil B
    Commented May 6, 2022 at 18:53
  • 7
    Important to note regarding @Phil B's comment. This is only relevant on OLED monitors when programming in dark mode. For LCD monitors or light mode, the energy consumed is identical.
    – Vapid
    Commented Jun 14, 2022 at 14:12
  • 6
    @VapidLinus Not exactly; on an LCD, energy is spent to hide an otherwise illuminated pixel, so more dark pixels means more energy spent! This is the opposite of CRT monitors. So in light mode: OLED/CRT, more pixels in the "S" = less energy; LCD, more pixels in the "S" = more energy. Dark mode, as you stated, is the opposite. Although the measurement would need to be taken with a lot of precision, and the difference could be hidden in noise.
    – fbiazi
    Commented Oct 24, 2022 at 14:29
  • 2
    I think the energy spent during the time used to think about this is more than what can be saved by the difference here. Commented Dec 20, 2022 at 11:37
477

There is one difference - you can't use String without using System; beforehand.

Updated:

"String" with a capital "S" is a keyword that refers to the built-in string data type in the .NET Framework's Base Class Library. It is a reference type that represents a sequence of characters.

On the other hand, "string" with a lowercase "s" is an alias for the "System.String" type, which means they are essentially the same thing. The use of "string" is just a shorthand way of referring to the "System.String" type, and it is used more commonly in C# code.

Both "String" and "string" are interchangeable in C#, and you can use either one to declare a variable of type string.

String myString = "Hello World"; // using the String keyword
string myString = "Hello World"; // using the string alias

However, it is recommended to use the "string" alias in C# code for consistency with the rest of the language's syntax and convention.

Here you can read more about C# String

4
  • True but it states string is an alias for System.String not for String (learn.microsoft.com/en-us/dotnet/csharp/language-reference/…)
    – Wouter
    Commented Feb 26, 2023 at 19:24
  • not true, at least, not any more (I just checked!) Commented Feb 27, 2023 at 17:33
  • 1
    @OluwadamilolaAdegunwa That's probably because you are building with Implicit Usings enabled
    – poizan42
    Commented Mar 30, 2023 at 13:20
  • FWIW what you can say about String vs string can also be said about int and Int32 as well as all other "basic" types... each has an assigned language keyword e.g. "int" and a corresponding Framework definition e.g. "Int32"; they are all equivalent in terms of practical use.
    – Zenilogix
    Commented Jan 4 at 15:04
362

It's been covered above; however, you can't use string in reflection; you must use String.

4
  • 25
    I do not understand what this answer means and why it was upvoted. You can use typeof(string) in reflection. Example one: if (someMethodInfo.ReturnType == typeof(string)) { ... } Example two: var p = typeof(string).GetProperty("FirstChar", BindingFlags.NonPublic | BindingFlags.Instance); Where is it that you must use String, not string? If you try things like Type.GetType("String") or Type.GetType("string"), neither will find the class because the namespace is missing. If for some silly reason you compare .Name of a type to "string" in a case-sensitive way, you are right. Commented May 24, 2019 at 12:04
  • Please explain.
    – Wouter
    Commented Jul 21, 2022 at 7:40
  • I don't think this is true. The compiler changes string into global::System.String so there's no reason string couldn't be used in reflection.
    – Plaje
    Commented May 15, 2023 at 13:02
  • Lemmings, upvoting just because someone else upvoted. Commented Jun 21 at 13:21
316

System.String is the .NET string class - in C# string is an alias for System.String - so in use they are the same.

As for guidelines I wouldn't get too bogged down and just use whichever you feel like - there are more important things in life and the code is going to be the same anyway.

If you find yourselves building systems where it is necessary to specify the size of the integers you are using and so tend to use Int16, Int32, UInt16, UInt32 etc. then it might look more natural to use String - and when moving around between different .net languages it might make things more understandable - otherwise I would use string and int.
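
For instance, a minimal sketch of the kind of code that paragraph has in mind (the RecordHeader type and its fields are made up): a binary record header where the width of every field matters, so the CLR names make the sizes explicit at a glance.

using System;

struct RecordHeader
{
    public UInt16 Version;      // 16-bit field
    public Int32 PayloadLength; // 32-bit field
    public UInt32 Checksum;     // 32-bit field
}

class RecordHeaderDemo
{
    static void Main()
    {
        var header = new RecordHeader { Version = 1, PayloadLength = 512, Checksum = 0xDEADBEEF };
        Console.WriteLine($"v{header.Version}, {header.PayloadLength} payload bytes");
    }
}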

251

I prefer the capitalized .NET types (rather than the aliases) for formatting reasons. The .NET types are colored the same as other object types (the value types are proper objects, after all).

Conditional and control keywords (like if, switch, and return) are lowercase and colored dark blue (by default). And I would rather not have the disagreement in use and format.

Consider:

String someString; 
string anotherString; 
1
  • Your preference is based on syntax color highlighting? Hahaha Commented Jun 21 at 13:22
227

string and String are identical in all ways (except the uppercase "S"). There are no performance implications either way.

Lowercase string is preferred in most projects due to the syntax highlighting

227

This YouTube video demonstrates practically how they differ.

But now for a long textual answer.

When we talk about .NET, there are two different things: there is the .NET framework, and there are the languages (C#, VB.NET, etc.) which use that framework.


"System.String" a.k.a "String" (capital "S") is a .NET framework data type while "string" is a C# data type.


In short "String" is an alias (the same thing called with different names) of "string". So technically both the below code statements will give the same output.

String s = "I am String";

or

string s = "I am String";

In the same way, there are aliases for other C# data types as shown below:

object: System.Object, string: System.String, bool: System.Boolean, byte: System.Byte, sbyte: System.SByte, short: System.Int16 and so on.

Now the million-dollar question from programmer's point of view: So when to use "String" and "string"?

The first thing, to avoid confusion, is to use one of them consistently. But from a best-practices perspective, when you make a variable declaration it's good to use "string" (small "s"), and when you are using it as a class name then "String" (capital "S") is preferred.

In the below code the left-hand side is a variable declaration and it is declared using "string". On the right-hand side, we are calling a method so "String" is more sensible.

string s = String.Concat("I am ", "String");
222

C# is a language which is used together with the CLR.

string is a type in C#.

System.String is a type in the CLR.

When you use C# together with the CLR, string will be mapped to System.String.

Theoretically, you could implement a C#-compiler that generated Java bytecode. A sensible implementation of this compiler would probably map string to java.lang.String in order to interoperate with the Java runtime library.

199

Lower case string is an alias for System.String. They are the same in C#.

There's a debate over whether you should use the System types (System.Int32, System.String, etc.) or the C# aliases (int, string, etc.). I personally believe you should use the C# aliases, but that's just my personal preference.

2
  • 7
    That's the problem, they are not 'C#' aliases, they are 'C' aliases. There is no native 'string' or 'int' in the C# language, just syntactic sugar.
    – Quark Soup
    Commented May 29, 2015 at 20:23
  • 20
    not sure where "C" came from here, since C# 5 language specification reads "The keyword string is simply an alias for the predefined class System.String." on page 85, paragraph 4.2.4. All high level languages are syntactic sugar over CPU instruction sets and bytecode.
    – aiodintsov
    Commented Feb 24, 2016 at 6:57
177

string is just an alias for System.String. The compiler will treat them identically.

The only practical difference is the syntax highlighting as you mention, and that you have to write using System if you use String.
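
A minimal sketch of that second difference (assuming a file with no using directives and no implicit usings enabled):

// No "using System;" at the top of this file.
class NoUsingsDemo
{
    static void Main()
    {
        string a = "compiles fine";        // the keyword never needs an import
        System.String b = "also fine";     // the fully qualified name always works
        // String c = "error";             // CS0246 without using System;
        System.Console.WriteLine(a + " " + b);
    }
}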

1
  • 22
    You do have to include a using System when using String, otherwise you get the following error: The type or namespace name 'String' could not be found (are you missing a using directive or an assembly reference?)
    – Ronald
    Commented Oct 16, 2009 at 17:53
164

Both are the same. But from a coding guidelines perspective it's better to use string instead of String. This is what developers generally use. e.g. instead of using Int32 we use int, as int is an alias for Int32.

FYI “The keyword string is simply an alias for the predefined class System.String.” - C# Language Specification 4.2.3 http://msdn2.microsoft.com/En-US/library/aa691153.aspx

138

As the others are saying, they're the same. StyleCop rules, by default, will enforce you to use string as a C# code style best practice, except when referencing System.String static functions, such as String.Format, String.Join, String.Concat, etc...

131

New answer after 6 years and 5 months (procrastination).

While string is a reserved C# keyword that always has a fixed meaning, String is just an ordinary identifier which could refer to anything. Depending on members of the current type, the current namespace and the applied using directives and their placement, String could be a value or a type distinct from global::System.String.

I shall provide two examples where using directives will not help.


First, when String is a value of the current type (or a local variable):

class MySequence<TElement>
{
  public IEnumerable<TElement> String { get; set; }

  void Example()
  {
    var test = String.Format("Hello {0}.", DateTime.Today.DayOfWeek);
  }
}

The above will not compile because IEnumerable<> does not have a non-static member called Format, and no extension methods apply. In the above case, it may still be possible to use String in other contexts where a type is the only possibility syntactically. For example String local = "Hi mum!"; could be OK (depending on namespace and using directives).

Worse: Saying String.Concat(someSequence) will likely (depending on usings) go to the Linq extension method Enumerable.Concat. It will not go to the static method string.Concat.
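
For illustration, a sketch of that trap (the ConcatTrap method name is made up), reusing the MySequence<TElement> class from above with using System.Linq; in effect:

using System.Collections.Generic;
using System.Linq;

class MySequence<TElement>
{
  public IEnumerable<TElement> String { get; set; }

  IEnumerable<TElement> ConcatTrap(IEnumerable<TElement> other)
  {
    // Binds to the LINQ extension method Enumerable.Concat on the String
    // property, not to the static string.Concat - it compiles, but it
    // concatenates sequences rather than strings.
    return String.Concat(other);
  }
}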


Secondly, when String is another type, nested inside the current type:

class MyPiano
{
  protected class String
  {
  }

  void Example()
  {
    var test1 = String.Format("Hello {0}.", DateTime.Today.DayOfWeek);
    String test2 = "Goodbye";
  }
}

Neither statement in the Example method compiles. Here String is always a piano string, MyPiano.String. No member (static or not) Format exists on it (or is inherited from its base class). And the value "Goodbye" cannot be converted into it.

3
  • 1
    Your examples are slightly contrived, but only slightly. I would consider both to be indicative of design problems, but in legacy code it's quite conceivable.
    – ClickRick
    Commented Sep 3, 2021 at 22:47
  • Of course, if you got a variable named string, things also don't compile. Commented Dec 20, 2022 at 11:39
  • Such a name will have to be written as @string in C#. It means string seen as an ordinary identifier (not the keyword). It is not recommended to use names like @string, but if you need to access a member written in another .NET language where the name string is not special, the @ trick becomes useful. Commented Dec 21, 2022 at 15:21
112

Using System types makes it easier to port between C# and VB.Net, if you are into that sort of thing.

101

Against what seems to be common practice among other programmers, I prefer String over string, just to highlight the fact that String is a reference type, as Jon Skeet mentioned.

1
  • Good point. If 'string' was not invented, we would not have any confusion and not need this pointless discussion. All our apps would just run fine with String. 'int' seems useful if you don't care about the bit size, which happens most of the time, and 'string' seems only added for consistency.
    – Roland
    Commented Apr 20, 2021 at 12:20
95

string is an alias (or shorthand) for System.String. That means that by typing string we mean System.String. You can read more in this link: 'string' is an alias/shorthand of System.String.

93

I'd just like to add this to lfoust's answer, from Richter's book:

The C# language specification states, “As a matter of style, use of the keyword is favored over use of the complete system type name.” I disagree with the language specification; I prefer to use the FCL type names and completely avoid the primitive type names. In fact, I wish that compilers didn’t even offer the primitive type names and forced developers to use the FCL type names instead. Here are my reasons:

  • I’ve seen a number of developers confused, not knowing whether to use string or String in their code. Because in C# string (a keyword) maps exactly to System.String (an FCL type), there is no difference and either can be used. Similarly, I’ve heard some developers say that int represents a 32-bit integer when the application is running on a 32-bit OS and that it represents a 64-bit integer when the application is running on a 64-bit OS. This statement is absolutely false: in C#, an int always maps to System.Int32, and therefore it represents a 32-bit integer regardless of the OS the code is running on. If programmers would use Int32 in their code, then this potential confusion is also eliminated.

  • In C#, long maps to System.Int64, but in a different programming language, long could map to an Int16 or Int32. In fact, C++/CLI does treat long as an Int32. Someone reading source code in one language could easily misinterpret the code’s intention if he or she were used to programming in a different programming language. In fact, most languages won’t even treat long as a keyword and won’t compile code that uses it.

  • The FCL has many methods that have type names as part of their method names. For example, the BinaryReader type offers methods such as ReadBoolean, ReadInt32, ReadSingle, and so on, and the System.Convert type offers methods such as ToBoolean, ToInt32, ToSingle, and so on. Although it’s legal to write the following code, the line with float feels very unnatural to me, and it’s not obvious that the line is correct:

    BinaryReader br = new BinaryReader(...);
    float val = br.ReadSingle(); // OK, but feels unnatural
    Single val = br.ReadSingle(); // OK and feels good
    
  • Many programmers that use C# exclusively tend to forget that other programming languages can be used against the CLR, and because of this, C#-isms creep into the class library code. For example, Microsoft’s FCL is almost exclusively written in C# and developers on the FCL team have now introduced methods into the library such as Array’s GetLongLength, which returns an Int64 value that is a long in C# but not in other languages (like C++/CLI). Another example is System.Linq.Enumerable’s LongCount method.

I didn't get his opinion before I read the complete paragraph.

83

String (System.String) is a class in the base class library. string (lower case) is a reserved word in C# that is an alias for System.String. Int32 vs. int is a similar situation, as is Boolean vs. bool. These C#-specific keywords enable you to declare primitives in a style similar to C.

82

@JaredPar (a developer on the C# compiler and prolific SO user!) wrote a great blog post on this issue. I think it is worth sharing here. It is a nice perspective on our subject.

string vs. String is not a style debate

[...]

The keyword string has concrete meaning in C#. It is the type System.String which exists in the core runtime assembly. The runtime intrinsically understands this type and provides the capabilities developers expect for strings in .NET. Its presence is so critical to C# that if that type doesn’t exist the compiler will exit before attempting to even parse a line of code. Hence string has a precise, unambiguous meaning in C# code.

The identifier String though has no concrete meaning in C#. It is an identifier that goes through all the name lookup rules as Widget, Student, etc … It could bind to string or it could bind to a type in another assembly entirely whose purposes may be entirely different than string. Worse it could be defined in a way such that code like String s = "hello"; continued to compile.

class TricksterString { 
  void Example() {
    String s = "Hello World"; // Okay but probably not what you expect.
  }
}

class String {
  public static implicit operator String(string s) => null;
}

The actual meaning of String will always depend on name resolution. That means it depends on all the source files in the project and all the types defined in all the referenced assemblies. In short it requires quite a bit of context to know what it means.

True that in the vast majority of cases String and string will bind to the same type. But using String still means developers are leaving their program up to interpretation in places where there is only one correct answer. When String does bind to the wrong type it can leave developers debugging for hours, filing bugs on the compiler team, and generally wasting time that could’ve been saved by using string.

Another way to visualize the difference is with this sample:

string s1 = 42; // Errors 100% of the time  
String s2 = 42; // Might error, might not, depends on the code

Many will argue that while this is technically accurate, using String is still fine because it’s exceedingly rare that a codebase would define a type of this name. Or that when String is defined it’s a sign of a bad codebase.

[...]

You’ll see that String is defined for a number of completely valid purposes: reflection helpers, serialization libraries, lexers, protocols, etc … For any of these libraries String vs. string has real consequences depending on where the code is used.

So remember when you see the String vs. string debate this is about semantics, not style. Choosing string gives crisp meaning to your codebase. Choosing String isn’t wrong but it’s leaving the door open for surprises in the future.

Note: I copy/pasted most of the blog post here for archival reasons. I omitted some parts, so I recommend skipping this excerpt and reading the blog post itself if you can.

81

It's a matter of convention, really. string just looks more like C/C++ style. The general convention is to use whatever shortcuts your chosen language has provided (int/Int for Int32). This goes for "object" and decimal as well.

Theoretically this could help to port code into some future 64-bit standard in which "int" might mean Int64, but that's not the point, and I would expect any upgrade wizard to change any int references to Int32 anyway just to be safe.

78

String is not a keyword, so it can be used as an identifier, whereas string is a keyword and cannot be used as an identifier. From a functional point of view, both are the same.
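
A short sketch of that difference (the identifier names below are arbitrary):

class IdentifierDemo
{
    static void Main()
    {
        int String = 42;        // legal: String is just an ordinary identifier here
        // int string = 42;     // illegal: string is a reserved keyword
        int @string = 7;        // legal: @ lets a keyword be used as an identifier
        System.Console.WriteLine(String + @string); // 49
    }
}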

77

Coming late to the party: I use the CLR types 100% of the time (well, except if forced to use the C# type, but I don't remember when the last time that was).

I originally started doing this years ago, as per the CLR books by Richter. It made sense to me that all CLR languages ultimately have to be able to support the set of CLR types, so using the CLR types yourself provided clearer, and possibly more "reusable" code.

Now that I've been doing it for years, it's a habit and I like the coloration that VS shows for the CLR types.

The only real downer is that auto-complete uses the C# type, so I end up re-typing automatically generated types to specify the CLR type instead.

Also, now, when I see "int" or "string", it just looks really wrong to me, like I'm looking at 1970's C code.

60

There is no difference.

The C# keyword string maps to the .NET type System.String - it is an alias that keeps to the naming conventions of the language.

Similarly, int maps to System.Int32.

53

There's a quote on this issue from Daniel Solis' book.

All the predefined types are mapped directly to underlying .NET types. The C# type names (string) are simply aliases for the .NET types (String or System.String), so using the .NET names works fine syntactically, although this is discouraged. Within a C# program, you should use the C# names rather than the .NET names.

47

string is a keyword, and you can't use string as an identifier.

String is not a keyword, and you can use it as an identifier:

Example

string String = "I am a string";

The keyword string is an alias for System.String; aside from the keyword issue, the two are exactly equivalent.

 typeof(string) == typeof(String)         // true
 typeof(String) == typeof(System.String)  // true
1
  • 3
    The only tiny difference is that if you use the String class, you need to import the System namespace on top of your file, whereas you don’t have to do this when using the string keyword.
    – Techiemanu
    Commented Mar 4, 2018 at 9:29
45

Yes, there's no difference between them, just like bool and Boolean.
